Jan 20 18:23:04 localhost kernel: Linux version 5.14.0-661.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026
Jan 20 18:23:04 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 20 18:23:04 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 18:23:04 localhost kernel: BIOS-provided physical RAM map:
Jan 20 18:23:04 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 20 18:23:04 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 20 18:23:04 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 20 18:23:04 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 20 18:23:04 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 20 18:23:04 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 20 18:23:04 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 20 18:23:04 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 20 18:23:04 localhost kernel: NX (Execute Disable) protection: active
Jan 20 18:23:04 localhost kernel: APIC: Static calls initialized
Jan 20 18:23:04 localhost kernel: SMBIOS 2.8 present.
Jan 20 18:23:04 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 20 18:23:04 localhost kernel: Hypervisor detected: KVM
Jan 20 18:23:04 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 20 18:23:04 localhost kernel: kvm-clock: using sched offset of 3128454352 cycles
Jan 20 18:23:04 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 20 18:23:04 localhost kernel: tsc: Detected 2799.998 MHz processor
Jan 20 18:23:04 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 20 18:23:04 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 20 18:23:04 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 20 18:23:04 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 20 18:23:04 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 20 18:23:04 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 20 18:23:04 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 20 18:23:04 localhost kernel: Using GB pages for direct mapping
Jan 20 18:23:04 localhost kernel: RAMDISK: [mem 0x2d426000-0x32a0afff]
Jan 20 18:23:04 localhost kernel: ACPI: Early table checksum verification disabled
Jan 20 18:23:04 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 20 18:23:04 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 18:23:04 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 18:23:04 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 18:23:04 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 20 18:23:04 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 18:23:04 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 20 18:23:04 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 20 18:23:04 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 20 18:23:04 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 20 18:23:04 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 20 18:23:04 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 20 18:23:04 localhost kernel: No NUMA configuration found
Jan 20 18:23:04 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 20 18:23:04 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Jan 20 18:23:04 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 20 18:23:04 localhost kernel: Zone ranges:
Jan 20 18:23:04 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 20 18:23:04 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 20 18:23:04 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 20 18:23:04 localhost kernel:   Device   empty
Jan 20 18:23:04 localhost kernel: Movable zone start for each node
Jan 20 18:23:04 localhost kernel: Early memory node ranges
Jan 20 18:23:04 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 20 18:23:04 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 20 18:23:04 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 20 18:23:04 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 20 18:23:04 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 20 18:23:04 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 20 18:23:04 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 20 18:23:04 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 20 18:23:04 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 20 18:23:04 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 20 18:23:04 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 20 18:23:04 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 20 18:23:04 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 20 18:23:04 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 20 18:23:04 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 20 18:23:04 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 20 18:23:04 localhost kernel: TSC deadline timer available
Jan 20 18:23:04 localhost kernel: CPU topo: Max. logical packages:   8
Jan 20 18:23:04 localhost kernel: CPU topo: Max. logical dies:       8
Jan 20 18:23:04 localhost kernel: CPU topo: Max. dies per package:   1
Jan 20 18:23:04 localhost kernel: CPU topo: Max. threads per core:   1
Jan 20 18:23:04 localhost kernel: CPU topo: Num. cores per package:     1
Jan 20 18:23:04 localhost kernel: CPU topo: Num. threads per package:   1
Jan 20 18:23:04 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 20 18:23:04 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 20 18:23:04 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 20 18:23:04 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 20 18:23:04 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 20 18:23:04 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 20 18:23:04 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 20 18:23:04 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 20 18:23:04 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 20 18:23:04 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 20 18:23:04 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 20 18:23:04 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 20 18:23:04 localhost kernel: Booting paravirtualized kernel on KVM
Jan 20 18:23:04 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 20 18:23:04 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 20 18:23:04 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 20 18:23:04 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 20 18:23:04 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 20 18:23:04 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 20 18:23:04 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 18:23:04 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64", will be passed to user space.
Jan 20 18:23:04 localhost kernel: random: crng init done
Jan 20 18:23:04 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 20 18:23:04 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 20 18:23:04 localhost kernel: Fallback order for Node 0: 0 
Jan 20 18:23:04 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 20 18:23:04 localhost kernel: Policy zone: Normal
Jan 20 18:23:04 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 20 18:23:04 localhost kernel: software IO TLB: area num 8.
Jan 20 18:23:04 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 20 18:23:04 localhost kernel: ftrace: allocating 49417 entries in 194 pages
Jan 20 18:23:04 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 20 18:23:04 localhost kernel: Dynamic Preempt: voluntary
Jan 20 18:23:04 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 20 18:23:04 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 20 18:23:04 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 20 18:23:04 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 20 18:23:04 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 20 18:23:04 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 20 18:23:04 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 20 18:23:04 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 20 18:23:04 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 18:23:04 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 18:23:04 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 20 18:23:04 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 20 18:23:04 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 20 18:23:04 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 20 18:23:04 localhost kernel: Console: colour VGA+ 80x25
Jan 20 18:23:04 localhost kernel: printk: console [ttyS0] enabled
Jan 20 18:23:04 localhost kernel: ACPI: Core revision 20230331
Jan 20 18:23:04 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 20 18:23:04 localhost kernel: x2apic enabled
Jan 20 18:23:04 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 20 18:23:04 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 20 18:23:04 localhost kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Jan 20 18:23:04 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 20 18:23:04 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 20 18:23:04 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 20 18:23:04 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 20 18:23:04 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 20 18:23:04 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 20 18:23:04 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 20 18:23:04 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 20 18:23:04 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 20 18:23:04 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 20 18:23:04 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 20 18:23:04 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 20 18:23:04 localhost kernel: x86/bugs: return thunk changed
Jan 20 18:23:04 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 20 18:23:04 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 20 18:23:04 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 20 18:23:04 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 20 18:23:04 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 20 18:23:04 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 20 18:23:04 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 20 18:23:04 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 20 18:23:04 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 20 18:23:04 localhost kernel: landlock: Up and running.
Jan 20 18:23:04 localhost kernel: Yama: becoming mindful.
Jan 20 18:23:04 localhost kernel: SELinux:  Initializing.
Jan 20 18:23:04 localhost kernel: LSM support for eBPF active
Jan 20 18:23:04 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 18:23:04 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 20 18:23:04 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 20 18:23:04 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 20 18:23:04 localhost kernel: ... version:                0
Jan 20 18:23:04 localhost kernel: ... bit width:              48
Jan 20 18:23:04 localhost kernel: ... generic registers:      6
Jan 20 18:23:04 localhost kernel: ... value mask:             0000ffffffffffff
Jan 20 18:23:04 localhost kernel: ... max period:             00007fffffffffff
Jan 20 18:23:04 localhost kernel: ... fixed-purpose events:   0
Jan 20 18:23:04 localhost kernel: ... event mask:             000000000000003f
Jan 20 18:23:04 localhost kernel: signal: max sigframe size: 1776
Jan 20 18:23:04 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 20 18:23:04 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 20 18:23:04 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 20 18:23:04 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 20 18:23:04 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 20 18:23:04 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 20 18:23:04 localhost kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Jan 20 18:23:04 localhost kernel: node 0 deferred pages initialised in 7ms
Jan 20 18:23:04 localhost kernel: Memory: 7763768K/8388068K available (16384K kernel code, 5797K rwdata, 13916K rodata, 4200K init, 7192K bss, 618360K reserved, 0K cma-reserved)
Jan 20 18:23:04 localhost kernel: devtmpfs: initialized
Jan 20 18:23:04 localhost kernel: x86/mm: Memory block size: 128MB
Jan 20 18:23:04 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 20 18:23:04 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 20 18:23:04 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 20 18:23:04 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 20 18:23:04 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 20 18:23:04 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 20 18:23:04 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 20 18:23:04 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 20 18:23:04 localhost kernel: audit: type=2000 audit(1768933382.877:1): state=initialized audit_enabled=0 res=1
Jan 20 18:23:04 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 20 18:23:04 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 20 18:23:04 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 20 18:23:04 localhost kernel: cpuidle: using governor menu
Jan 20 18:23:04 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 20 18:23:04 localhost kernel: PCI: Using configuration type 1 for base access
Jan 20 18:23:04 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 20 18:23:04 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 20 18:23:04 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 20 18:23:04 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 20 18:23:04 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 20 18:23:04 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 20 18:23:04 localhost kernel: Demotion targets for Node 0: null
Jan 20 18:23:04 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 20 18:23:04 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 20 18:23:04 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 20 18:23:04 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 20 18:23:04 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 20 18:23:04 localhost kernel: ACPI: Interpreter enabled
Jan 20 18:23:04 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 20 18:23:04 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 20 18:23:04 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 20 18:23:04 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 20 18:23:04 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 20 18:23:04 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 20 18:23:04 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [3] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [4] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [5] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [6] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [7] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [8] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [9] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [10] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [11] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [12] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [13] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [14] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [15] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [16] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [17] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [18] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [19] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [20] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [21] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [22] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [23] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [24] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [25] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [26] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [27] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [28] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [29] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [30] registered
Jan 20 18:23:04 localhost kernel: acpiphp: Slot [31] registered
Jan 20 18:23:04 localhost kernel: PCI host bridge to bus 0000:00
Jan 20 18:23:04 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 20 18:23:04 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 20 18:23:04 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 20 18:23:04 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 20 18:23:04 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 20 18:23:04 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 20 18:23:04 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 20 18:23:04 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 20 18:23:04 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 20 18:23:04 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 20 18:23:04 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 20 18:23:04 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 20 18:23:04 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 20 18:23:04 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 18:23:04 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 20 18:23:04 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 20 18:23:04 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 20 18:23:04 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 20 18:23:04 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 20 18:23:04 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 20 18:23:04 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 20 18:23:04 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 20 18:23:04 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 18:23:04 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 20 18:23:04 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 20 18:23:04 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 20 18:23:04 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 20 18:23:04 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 20 18:23:04 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 20 18:23:04 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 20 18:23:04 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 20 18:23:04 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 20 18:23:04 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 20 18:23:04 localhost kernel: iommu: Default domain type: Translated
Jan 20 18:23:04 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 20 18:23:04 localhost kernel: SCSI subsystem initialized
Jan 20 18:23:04 localhost kernel: ACPI: bus type USB registered
Jan 20 18:23:04 localhost kernel: usbcore: registered new interface driver usbfs
Jan 20 18:23:04 localhost kernel: usbcore: registered new interface driver hub
Jan 20 18:23:04 localhost kernel: usbcore: registered new device driver usb
Jan 20 18:23:04 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 20 18:23:04 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 20 18:23:04 localhost kernel: PTP clock support registered
Jan 20 18:23:04 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 20 18:23:04 localhost kernel: NetLabel: Initializing
Jan 20 18:23:04 localhost kernel: NetLabel:  domain hash size = 128
Jan 20 18:23:04 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 20 18:23:04 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 20 18:23:04 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 20 18:23:04 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 20 18:23:04 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 20 18:23:04 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 20 18:23:04 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 20 18:23:04 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 20 18:23:04 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 20 18:23:04 localhost kernel: vgaarb: loaded
Jan 20 18:23:04 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 20 18:23:04 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 20 18:23:04 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 20 18:23:04 localhost kernel: pnp: PnP ACPI init
Jan 20 18:23:04 localhost kernel: pnp 00:03: [dma 2]
Jan 20 18:23:04 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 20 18:23:04 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 20 18:23:04 localhost kernel: NET: Registered PF_INET protocol family
Jan 20 18:23:04 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 20 18:23:04 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 20 18:23:04 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 20 18:23:04 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 20 18:23:04 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 20 18:23:04 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 20 18:23:04 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 20 18:23:04 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 20 18:23:04 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 20 18:23:04 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 20 18:23:04 localhost kernel: NET: Registered PF_XDP protocol family
Jan 20 18:23:04 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 20 18:23:04 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 20 18:23:04 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 20 18:23:04 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 20 18:23:04 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 20 18:23:04 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 20 18:23:04 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 20 18:23:04 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 80097 usecs
Jan 20 18:23:04 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 20 18:23:04 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 20 18:23:04 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 20 18:23:04 localhost kernel: ACPI: bus type thunderbolt registered
Jan 20 18:23:04 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 20 18:23:04 localhost kernel: Initialise system trusted keyrings
Jan 20 18:23:04 localhost kernel: Key type blacklist registered
Jan 20 18:23:04 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 20 18:23:04 localhost kernel: zbud: loaded
Jan 20 18:23:04 localhost kernel: integrity: Platform Keyring initialized
Jan 20 18:23:04 localhost kernel: integrity: Machine keyring initialized
Jan 20 18:23:04 localhost kernel: Freeing initrd memory: 87956K
Jan 20 18:23:04 localhost kernel: NET: Registered PF_ALG protocol family
Jan 20 18:23:04 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 20 18:23:04 localhost kernel: Key type asymmetric registered
Jan 20 18:23:04 localhost kernel: Asymmetric key parser 'x509' registered
Jan 20 18:23:04 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 20 18:23:04 localhost kernel: io scheduler mq-deadline registered
Jan 20 18:23:04 localhost kernel: io scheduler kyber registered
Jan 20 18:23:04 localhost kernel: io scheduler bfq registered
Jan 20 18:23:04 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 20 18:23:04 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 20 18:23:04 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 20 18:23:04 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 20 18:23:04 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 20 18:23:04 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 20 18:23:04 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 20 18:23:04 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 20 18:23:04 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 20 18:23:04 localhost kernel: Non-volatile memory driver v1.3
Jan 20 18:23:04 localhost kernel: rdac: device handler registered
Jan 20 18:23:04 localhost kernel: hp_sw: device handler registered
Jan 20 18:23:04 localhost kernel: emc: device handler registered
Jan 20 18:23:04 localhost kernel: alua: device handler registered
Jan 20 18:23:04 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 20 18:23:04 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 20 18:23:04 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 20 18:23:04 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 20 18:23:04 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 20 18:23:04 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 20 18:23:04 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 20 18:23:04 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-661.el9.x86_64 uhci_hcd
Jan 20 18:23:04 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 20 18:23:04 localhost kernel: hub 1-0:1.0: USB hub found
Jan 20 18:23:04 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 20 18:23:04 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 20 18:23:04 localhost kernel: usbserial: USB Serial support registered for generic
Jan 20 18:23:04 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 20 18:23:04 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 20 18:23:04 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 20 18:23:04 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 20 18:23:04 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 20 18:23:04 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 20 18:23:04 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 20 18:23:04 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-20T18:23:03 UTC (1768933383)
Jan 20 18:23:04 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 20 18:23:04 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 20 18:23:04 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 20 18:23:04 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 20 18:23:04 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 20 18:23:04 localhost kernel: usbcore: registered new interface driver usbhid
Jan 20 18:23:04 localhost kernel: usbhid: USB HID core driver
Jan 20 18:23:04 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 20 18:23:04 localhost kernel: Initializing XFRM netlink socket
Jan 20 18:23:04 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 20 18:23:04 localhost kernel: Segment Routing with IPv6
Jan 20 18:23:04 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 20 18:23:04 localhost kernel: mpls_gso: MPLS GSO support
Jan 20 18:23:04 localhost kernel: IPI shorthand broadcast: enabled
Jan 20 18:23:04 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 20 18:23:04 localhost kernel: AES CTR mode by8 optimization enabled
Jan 20 18:23:04 localhost kernel: sched_clock: Marking stable (1164004892, 146936175)->(1450214906, -139273839)
Jan 20 18:23:04 localhost kernel: registered taskstats version 1
Jan 20 18:23:04 localhost kernel: Loading compiled-in X.509 certificates
Jan 20 18:23:04 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 20 18:23:04 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 20 18:23:04 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 20 18:23:04 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 20 18:23:04 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 20 18:23:04 localhost kernel: Demotion targets for Node 0: null
Jan 20 18:23:04 localhost kernel: page_owner is disabled
Jan 20 18:23:04 localhost kernel: Key type .fscrypt registered
Jan 20 18:23:04 localhost kernel: Key type fscrypt-provisioning registered
Jan 20 18:23:04 localhost kernel: Key type big_key registered
Jan 20 18:23:04 localhost kernel: Key type encrypted registered
Jan 20 18:23:04 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 20 18:23:04 localhost kernel: Loading compiled-in module X.509 certificates
Jan 20 18:23:04 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 04453f216699002fd63185eeab832de990bee6d7'
Jan 20 18:23:04 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 20 18:23:04 localhost kernel: ima: No architecture policies found
Jan 20 18:23:04 localhost kernel: evm: Initialising EVM extended attributes:
Jan 20 18:23:04 localhost kernel: evm: security.selinux
Jan 20 18:23:04 localhost kernel: evm: security.SMACK64 (disabled)
Jan 20 18:23:04 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 20 18:23:04 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 20 18:23:04 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 20 18:23:04 localhost kernel: evm: security.apparmor (disabled)
Jan 20 18:23:04 localhost kernel: evm: security.ima
Jan 20 18:23:04 localhost kernel: evm: security.capability
Jan 20 18:23:04 localhost kernel: evm: HMAC attrs: 0x1
Jan 20 18:23:04 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 20 18:23:04 localhost kernel: Running certificate verification RSA selftest
Jan 20 18:23:04 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 20 18:23:04 localhost kernel: Running certificate verification ECDSA selftest
Jan 20 18:23:04 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 20 18:23:04 localhost kernel: clk: Disabling unused clocks
Jan 20 18:23:04 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 20 18:23:04 localhost kernel: Freeing unused kernel image (initmem) memory: 4200K
Jan 20 18:23:04 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 20 18:23:04 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Jan 20 18:23:04 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 20 18:23:04 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 20 18:23:04 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 20 18:23:04 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 20 18:23:04 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 20 18:23:04 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 20 18:23:04 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 20 18:23:04 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 20 18:23:04 localhost kernel: Run /init as init process
Jan 20 18:23:04 localhost kernel:   with arguments:
Jan 20 18:23:04 localhost kernel:     /init
Jan 20 18:23:04 localhost kernel:   with environment:
Jan 20 18:23:04 localhost kernel:     HOME=/
Jan 20 18:23:04 localhost kernel:     TERM=linux
Jan 20 18:23:04 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64
Jan 20 18:23:04 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 20 18:23:04 localhost systemd[1]: Detected virtualization kvm.
Jan 20 18:23:04 localhost systemd[1]: Detected architecture x86-64.
Jan 20 18:23:04 localhost systemd[1]: Running in initrd.
Jan 20 18:23:04 localhost systemd[1]: No hostname configured, using default hostname.
Jan 20 18:23:04 localhost systemd[1]: Hostname set to <localhost>.
Jan 20 18:23:04 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 20 18:23:04 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 20 18:23:04 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 20 18:23:04 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 20 18:23:04 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 20 18:23:04 localhost systemd[1]: Reached target Local File Systems.
Jan 20 18:23:04 localhost systemd[1]: Reached target Path Units.
Jan 20 18:23:04 localhost systemd[1]: Reached target Slice Units.
Jan 20 18:23:04 localhost systemd[1]: Reached target Swaps.
Jan 20 18:23:04 localhost systemd[1]: Reached target Timer Units.
Jan 20 18:23:04 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 20 18:23:04 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 20 18:23:04 localhost systemd[1]: Listening on Journal Socket.
Jan 20 18:23:04 localhost systemd[1]: Listening on udev Control Socket.
Jan 20 18:23:04 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 20 18:23:04 localhost systemd[1]: Reached target Socket Units.
Jan 20 18:23:04 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 20 18:23:04 localhost systemd[1]: Starting Journal Service...
Jan 20 18:23:04 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 20 18:23:04 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 20 18:23:04 localhost systemd[1]: Starting Create System Users...
Jan 20 18:23:04 localhost systemd[1]: Starting Setup Virtual Console...
Jan 20 18:23:04 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 20 18:23:04 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 20 18:23:04 localhost systemd-journald[309]: Journal started
Jan 20 18:23:04 localhost systemd-journald[309]: Runtime Journal (/run/log/journal/6fed1acbe03a42468d491248ad1fe57b) is 8.0M, max 153.6M, 145.6M free.
Jan 20 18:23:04 localhost systemd-sysusers[313]: Creating group 'users' with GID 100.
Jan 20 18:23:04 localhost systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Jan 20 18:23:04 localhost systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 20 18:23:04 localhost systemd[1]: Started Journal Service.
Jan 20 18:23:04 localhost systemd[1]: Finished Create System Users.
Jan 20 18:23:04 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 20 18:23:04 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 20 18:23:04 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 20 18:23:04 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 20 18:23:04 localhost systemd[1]: Finished Setup Virtual Console.
Jan 20 18:23:04 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 20 18:23:04 localhost systemd[1]: Starting dracut cmdline hook...
Jan 20 18:23:04 localhost dracut-cmdline[328]: dracut-9 dracut-057-102.git20250818.el9
Jan 20 18:23:04 localhost dracut-cmdline[328]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-661.el9.x86_64 root=UUID=22ac9141-3960-4912-b20e-19fc8a328d40 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 20 18:23:04 localhost systemd[1]: Finished dracut cmdline hook.
Jan 20 18:23:04 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 20 18:23:04 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 20 18:23:04 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 20 18:23:04 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 20 18:23:04 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 20 18:23:04 localhost kernel: RPC: Registered udp transport module.
Jan 20 18:23:04 localhost kernel: RPC: Registered tcp transport module.
Jan 20 18:23:04 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 20 18:23:04 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 20 18:23:04 localhost rpc.statd[443]: Version 2.5.4 starting
Jan 20 18:23:05 localhost rpc.statd[443]: Initializing NSM state
Jan 20 18:23:05 localhost rpc.idmapd[448]: Setting log level to 0
Jan 20 18:23:05 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 20 18:23:05 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 20 18:23:05 localhost systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Jan 20 18:23:05 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 20 18:23:05 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 20 18:23:05 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 20 18:23:05 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 20 18:23:05 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 20 18:23:05 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 18:23:05 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 20 18:23:05 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 20 18:23:05 localhost systemd[1]: Reached target Network.
Jan 20 18:23:05 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 20 18:23:05 localhost systemd[1]: Starting dracut initqueue hook...
Jan 20 18:23:05 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 18:23:05 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 18:23:05 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 20 18:23:05 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 20 18:23:05 localhost systemd[1]: Reached target System Initialization.
Jan 20 18:23:05 localhost systemd[1]: Reached target Basic System.
Jan 20 18:23:05 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 20 18:23:05 localhost kernel: libata version 3.00 loaded.
Jan 20 18:23:05 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 20 18:23:05 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 20 18:23:05 localhost kernel: scsi host0: ata_piix
Jan 20 18:23:05 localhost kernel: scsi host1: ata_piix
Jan 20 18:23:05 localhost kernel:  vda: vda1
Jan 20 18:23:05 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 20 18:23:05 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 20 18:23:05 localhost kernel: ata1: found unknown device (class 0)
Jan 20 18:23:05 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 20 18:23:05 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 20 18:23:05 localhost systemd-udevd[482]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:23:05 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 20 18:23:05 localhost systemd[1]: Found device /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 20 18:23:05 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 20 18:23:05 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 20 18:23:05 localhost systemd[1]: Reached target Initrd Root Device.
Jan 20 18:23:05 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 20 18:23:05 localhost systemd[1]: Finished dracut initqueue hook.
Jan 20 18:23:05 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 20 18:23:05 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 20 18:23:05 localhost systemd[1]: Reached target Remote File Systems.
Jan 20 18:23:05 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 20 18:23:05 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 20 18:23:05 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40...
Jan 20 18:23:05 localhost systemd-fsck[553]: /usr/sbin/fsck.xfs: XFS file system.
Jan 20 18:23:05 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40.
Jan 20 18:23:05 localhost systemd[1]: Mounting /sysroot...
Jan 20 18:23:06 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 20 18:23:06 localhost kernel: XFS (vda1): Mounting V5 Filesystem 22ac9141-3960-4912-b20e-19fc8a328d40
Jan 20 18:23:06 localhost kernel: XFS (vda1): Ending clean mount
Jan 20 18:23:06 localhost systemd[1]: Mounted /sysroot.
Jan 20 18:23:06 localhost systemd[1]: Reached target Initrd Root File System.
Jan 20 18:23:06 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 20 18:23:06 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 20 18:23:06 localhost systemd[1]: Reached target Initrd File Systems.
Jan 20 18:23:06 localhost systemd[1]: Reached target Initrd Default Target.
Jan 20 18:23:06 localhost systemd[1]: Starting dracut mount hook...
Jan 20 18:23:06 localhost systemd[1]: Finished dracut mount hook.
Jan 20 18:23:06 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 20 18:23:06 localhost rpc.idmapd[448]: exiting on signal 15
Jan 20 18:23:06 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 20 18:23:06 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 20 18:23:06 localhost systemd[1]: Stopped target Network.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Timer Units.
Jan 20 18:23:06 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 20 18:23:06 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Basic System.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Path Units.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Remote File Systems.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Slice Units.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Socket Units.
Jan 20 18:23:06 localhost systemd[1]: Stopped target System Initialization.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Local File Systems.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Swaps.
Jan 20 18:23:06 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped dracut mount hook.
Jan 20 18:23:06 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 20 18:23:06 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 20 18:23:06 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 20 18:23:06 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 20 18:23:06 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 20 18:23:06 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 20 18:23:06 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 20 18:23:06 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 20 18:23:06 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 20 18:23:06 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 20 18:23:06 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 20 18:23:06 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 20 18:23:06 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Closed udev Control Socket.
Jan 20 18:23:06 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Closed udev Kernel Socket.
Jan 20 18:23:06 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 20 18:23:06 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 20 18:23:06 localhost systemd[1]: Starting Cleanup udev Database...
Jan 20 18:23:06 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 20 18:23:06 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 20 18:23:06 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Stopped Create System Users.
Jan 20 18:23:06 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 20 18:23:06 localhost systemd[1]: Finished Cleanup udev Database.
Jan 20 18:23:06 localhost systemd[1]: Reached target Switch Root.
Jan 20 18:23:06 localhost systemd[1]: Starting Switch Root...
Jan 20 18:23:06 localhost systemd[1]: Switching root.
Jan 20 18:23:06 localhost systemd-journald[309]: Journal stopped
Jan 20 18:23:07 localhost systemd-journald[309]: Received SIGTERM from PID 1 (systemd).
Jan 20 18:23:07 localhost kernel: audit: type=1404 audit(1768933386.864:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 20 18:23:07 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:23:07 localhost kernel: SELinux:  policy capability open_perms=1
Jan 20 18:23:07 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:23:07 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:23:07 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:23:07 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:23:07 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:23:07 localhost kernel: audit: type=1403 audit(1768933387.001:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 20 18:23:07 localhost systemd[1]: Successfully loaded SELinux policy in 140.105ms.
Jan 20 18:23:07 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.306ms.
Jan 20 18:23:07 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 20 18:23:07 localhost systemd[1]: Detected virtualization kvm.
Jan 20 18:23:07 localhost systemd[1]: Detected architecture x86-64.
Jan 20 18:23:07 localhost systemd-rc-local-generator[638]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:23:07 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 20 18:23:07 localhost systemd[1]: Stopped Switch Root.
Jan 20 18:23:07 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 20 18:23:07 localhost systemd[1]: Created slice Slice /system/getty.
Jan 20 18:23:07 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 20 18:23:07 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 20 18:23:07 localhost systemd[1]: Created slice User and Session Slice.
Jan 20 18:23:07 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 20 18:23:07 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 20 18:23:07 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 20 18:23:07 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 20 18:23:07 localhost systemd[1]: Stopped target Switch Root.
Jan 20 18:23:07 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 20 18:23:07 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 20 18:23:07 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 20 18:23:07 localhost systemd[1]: Reached target Path Units.
Jan 20 18:23:07 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 20 18:23:07 localhost systemd[1]: Reached target Slice Units.
Jan 20 18:23:07 localhost systemd[1]: Reached target Swaps.
Jan 20 18:23:07 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 20 18:23:07 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 20 18:23:07 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 20 18:23:07 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 20 18:23:07 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 20 18:23:07 localhost systemd[1]: Listening on udev Control Socket.
Jan 20 18:23:07 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 20 18:23:07 localhost systemd[1]: Mounting Huge Pages File System...
Jan 20 18:23:07 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 20 18:23:07 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 20 18:23:07 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 20 18:23:07 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 20 18:23:07 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 20 18:23:07 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 18:23:07 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 20 18:23:07 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 20 18:23:07 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 20 18:23:07 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 20 18:23:07 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 20 18:23:07 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 20 18:23:07 localhost systemd[1]: Stopped Journal Service.
Jan 20 18:23:07 localhost systemd[1]: Starting Journal Service...
Jan 20 18:23:07 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 20 18:23:07 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 20 18:23:07 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 18:23:07 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 20 18:23:07 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 20 18:23:07 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 20 18:23:07 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 20 18:23:07 localhost kernel: fuse: init (API version 7.37)
Jan 20 18:23:07 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 20 18:23:07 localhost systemd[1]: Mounted Huge Pages File System.
Jan 20 18:23:07 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 20 18:23:07 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 20 18:23:07 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 20 18:23:07 localhost systemd-journald[679]: Journal started
Jan 20 18:23:07 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 20 18:23:07 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 20 18:23:07 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 20 18:23:07 localhost systemd[1]: Started Journal Service.
Jan 20 18:23:07 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 20 18:23:07 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 18:23:07 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 18:23:07 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 20 18:23:07 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 20 18:23:07 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 20 18:23:07 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 20 18:23:07 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 20 18:23:07 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 20 18:23:07 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 20 18:23:07 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 20 18:23:07 localhost kernel: ACPI: bus type drm_connector registered
Jan 20 18:23:07 localhost systemd[1]: Mounting FUSE Control File System...
Jan 20 18:23:07 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 20 18:23:07 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 20 18:23:07 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 20 18:23:07 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 20 18:23:07 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 20 18:23:07 localhost systemd[1]: Starting Create System Users...
Jan 20 18:23:07 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 20 18:23:07 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 20 18:23:07 localhost systemd-journald[679]: Runtime Journal (/run/log/journal/85ac68c10a6e7ae08ceb898dbdca0cb5) is 8.0M, max 153.6M, 145.6M free.
Jan 20 18:23:07 localhost systemd-journald[679]: Received client request to flush runtime journal.
Jan 20 18:23:07 localhost systemd[1]: Mounted FUSE Control File System.
Jan 20 18:23:07 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 20 18:23:07 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 20 18:23:07 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 20 18:23:07 localhost systemd[1]: Finished Create System Users.
Jan 20 18:23:07 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 20 18:23:07 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 20 18:23:07 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 20 18:23:07 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 20 18:23:07 localhost systemd[1]: Reached target Local File Systems.
Jan 20 18:23:07 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 20 18:23:07 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 20 18:23:07 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 20 18:23:07 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 20 18:23:07 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 20 18:23:07 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 20 18:23:07 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 20 18:23:07 localhost bootctl[696]: Couldn't find EFI system partition, skipping.
Jan 20 18:23:07 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 20 18:23:07 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 20 18:23:07 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 20 18:23:07 localhost systemd[1]: Starting Security Auditing Service...
Jan 20 18:23:07 localhost systemd[1]: Starting RPC Bind...
Jan 20 18:23:07 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 20 18:23:07 localhost auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 20 18:23:07 localhost auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 20 18:23:07 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 20 18:23:07 localhost systemd[1]: Started RPC Bind.
Jan 20 18:23:07 localhost augenrules[707]: /sbin/augenrules: No change
Jan 20 18:23:07 localhost augenrules[722]: No rules
Jan 20 18:23:07 localhost augenrules[722]: enabled 1
Jan 20 18:23:07 localhost augenrules[722]: failure 1
Jan 20 18:23:07 localhost augenrules[722]: pid 702
Jan 20 18:23:07 localhost augenrules[722]: rate_limit 0
Jan 20 18:23:07 localhost augenrules[722]: backlog_limit 8192
Jan 20 18:23:07 localhost augenrules[722]: lost 0
Jan 20 18:23:07 localhost augenrules[722]: backlog 2
Jan 20 18:23:07 localhost augenrules[722]: backlog_wait_time 60000
Jan 20 18:23:07 localhost augenrules[722]: backlog_wait_time_actual 0
Jan 20 18:23:07 localhost augenrules[722]: enabled 1
Jan 20 18:23:07 localhost augenrules[722]: failure 1
Jan 20 18:23:07 localhost augenrules[722]: pid 702
Jan 20 18:23:07 localhost augenrules[722]: rate_limit 0
Jan 20 18:23:07 localhost augenrules[722]: backlog_limit 8192
Jan 20 18:23:07 localhost augenrules[722]: lost 0
Jan 20 18:23:07 localhost augenrules[722]: backlog 0
Jan 20 18:23:07 localhost augenrules[722]: backlog_wait_time 60000
Jan 20 18:23:07 localhost augenrules[722]: backlog_wait_time_actual 0
Jan 20 18:23:07 localhost augenrules[722]: enabled 1
Jan 20 18:23:07 localhost augenrules[722]: failure 1
Jan 20 18:23:07 localhost augenrules[722]: pid 702
Jan 20 18:23:07 localhost augenrules[722]: rate_limit 0
Jan 20 18:23:07 localhost augenrules[722]: backlog_limit 8192
Jan 20 18:23:07 localhost augenrules[722]: lost 0
Jan 20 18:23:07 localhost augenrules[722]: backlog 0
Jan 20 18:23:07 localhost augenrules[722]: backlog_wait_time 60000
Jan 20 18:23:07 localhost augenrules[722]: backlog_wait_time_actual 0
Jan 20 18:23:07 localhost systemd[1]: Started Security Auditing Service.
Jan 20 18:23:07 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 20 18:23:07 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 20 18:23:08 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 20 18:23:08 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 20 18:23:08 localhost systemd[1]: Starting Update is Completed...
Jan 20 18:23:08 localhost systemd[1]: Finished Update is Completed.
Jan 20 18:23:08 localhost systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Jan 20 18:23:08 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 20 18:23:08 localhost systemd[1]: Reached target System Initialization.
Jan 20 18:23:08 localhost systemd[1]: Started dnf makecache --timer.
Jan 20 18:23:08 localhost systemd[1]: Started Daily rotation of log files.
Jan 20 18:23:08 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 20 18:23:08 localhost systemd[1]: Reached target Timer Units.
Jan 20 18:23:08 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 20 18:23:08 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 20 18:23:08 localhost systemd[1]: Reached target Socket Units.
Jan 20 18:23:08 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 20 18:23:08 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 18:23:08 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 20 18:23:08 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 20 18:23:08 localhost systemd-udevd[732]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:23:08 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 20 18:23:08 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 20 18:23:08 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 20 18:23:08 localhost systemd[1]: Reached target Basic System.
Jan 20 18:23:08 localhost dbus-broker-lau[760]: Ready
Jan 20 18:23:08 localhost systemd[1]: Starting NTP client/server...
Jan 20 18:23:08 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 20 18:23:08 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 20 18:23:08 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 20 18:23:08 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 20 18:23:08 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 20 18:23:08 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 20 18:23:08 localhost chronyd[784]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 20 18:23:08 localhost chronyd[784]: Loaded 0 symmetric keys
Jan 20 18:23:08 localhost chronyd[784]: Using right/UTC timezone to obtain leap second data
Jan 20 18:23:08 localhost chronyd[784]: Loaded seccomp filter (level 2)
Jan 20 18:23:08 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 20 18:23:08 localhost systemd[1]: Started irqbalance daemon.
Jan 20 18:23:08 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 20 18:23:08 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 18:23:08 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 18:23:08 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 18:23:08 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 20 18:23:08 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 20 18:23:08 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 20 18:23:08 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 20 18:23:08 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 20 18:23:08 localhost kernel: Console: switching to colour dummy device 80x25
Jan 20 18:23:08 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 20 18:23:08 localhost kernel: [drm] features: -context_init
Jan 20 18:23:08 localhost kernel: [drm] number of scanouts: 1
Jan 20 18:23:08 localhost kernel: [drm] number of cap sets: 0
Jan 20 18:23:08 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 20 18:23:08 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 20 18:23:08 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 20 18:23:08 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 20 18:23:08 localhost systemd[1]: Starting User Login Management...
Jan 20 18:23:08 localhost kernel: kvm_amd: TSC scaling supported
Jan 20 18:23:08 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 20 18:23:08 localhost kernel: kvm_amd: Nested Paging enabled
Jan 20 18:23:08 localhost kernel: kvm_amd: LBR virtualization supported
Jan 20 18:23:08 localhost systemd[1]: Started NTP client/server.
Jan 20 18:23:08 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 20 18:23:08 localhost systemd-logind[797]: New seat seat0.
Jan 20 18:23:08 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 20 18:23:08 localhost systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 20 18:23:08 localhost systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 18:23:08 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 20 18:23:08 localhost systemd[1]: Started User Login Management.
Jan 20 18:23:08 localhost iptables.init[787]: iptables: Applying firewall rules: [  OK  ]
Jan 20 18:23:08 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 20 18:23:08 localhost cloud-init[840]: Cloud-init v. 24.4-8.el9 running 'init-local' at Tue, 20 Jan 2026 18:23:08 +0000. Up 6.47 seconds.
Jan 20 18:23:09 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 20 18:23:09 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 20 18:23:09 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpgem3t85w.mount: Deactivated successfully.
Jan 20 18:23:09 localhost systemd[1]: Starting Hostname Service...
Jan 20 18:23:09 localhost systemd[1]: Started Hostname Service.
Jan 20 18:23:09 np0005589310.novalocal systemd-hostnamed[854]: Hostname set to <np0005589310.novalocal> (static)
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Reached target Preparation for Network.
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Starting Network Manager...
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.6794] NetworkManager (version 1.54.3-2.el9) is starting... (boot:67fc3c9d-8ab5-4c8d-ad06-0b5b4ad77266)
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.6800] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.6887] manager[0x560cf1df7000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.6927] hostname: hostname: using hostnamed
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.6927] hostname: static hostname changed from (none) to "np0005589310.novalocal"
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.6933] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7108] manager[0x560cf1df7000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7109] manager[0x560cf1df7000]: rfkill: WWAN hardware radio set enabled
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7155] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7155] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7156] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7157] manager: Networking is enabled by state file
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7159] settings: Loaded settings plugin: keyfile (internal)
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7169] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7193] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7205] dhcp: init: Using DHCP client 'internal'
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7207] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7220] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7228] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7237] device (lo): Activation: starting connection 'lo' (9dbcb845-48af-44e7-aac2-9b1c27d04ec3)
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7247] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7251] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7283] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7286] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7288] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7293] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7295] device (eth0): carrier: link connected
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7298] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7303] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7315] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7320] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7320] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7323] manager: NetworkManager state is now CONNECTING
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7324] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7331] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7335] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Started Network Manager.
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Reached target Network.
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7594] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7597] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 18:23:09 np0005589310.novalocal NetworkManager[858]: <info>  [1768933389.7603] device (lo): Activation: successful, device activated.
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Reached target NFS client services.
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: Reached target Remote File Systems.
Jan 20 18:23:09 np0005589310.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 20 18:23:10 np0005589310.novalocal NetworkManager[858]: <info>  [1768933390.1452] dhcp4 (eth0): state changed new lease, address=38.102.83.210
Jan 20 18:23:10 np0005589310.novalocal NetworkManager[858]: <info>  [1768933390.1464] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 18:23:10 np0005589310.novalocal NetworkManager[858]: <info>  [1768933390.1486] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:23:10 np0005589310.novalocal NetworkManager[858]: <info>  [1768933390.1538] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:23:10 np0005589310.novalocal NetworkManager[858]: <info>  [1768933390.1539] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:23:10 np0005589310.novalocal NetworkManager[858]: <info>  [1768933390.1541] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 18:23:10 np0005589310.novalocal NetworkManager[858]: <info>  [1768933390.1543] device (eth0): Activation: successful, device activated.
Jan 20 18:23:10 np0005589310.novalocal NetworkManager[858]: <info>  [1768933390.1547] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 18:23:10 np0005589310.novalocal NetworkManager[858]: <info>  [1768933390.1549] manager: startup complete
Jan 20 18:23:10 np0005589310.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 20 18:23:10 np0005589310.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: Cloud-init v. 24.4-8.el9 running 'init' at Tue, 20 Jan 2026 18:23:10 +0000. Up 8.10 seconds.
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.210         | 255.255.255.0 | global | fa:16:3e:cb:85:96 |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fecb:8596/64 |       .       |  link  | fa:16:3e:cb:85:96 |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 20 18:23:10 np0005589310.novalocal cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 20 18:23:11 np0005589310.novalocal useradd[989]: new group: name=cloud-user, GID=1001
Jan 20 18:23:11 np0005589310.novalocal useradd[989]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 20 18:23:11 np0005589310.novalocal useradd[989]: add 'cloud-user' to group 'adm'
Jan 20 18:23:11 np0005589310.novalocal useradd[989]: add 'cloud-user' to group 'systemd-journal'
Jan 20 18:23:11 np0005589310.novalocal useradd[989]: add 'cloud-user' to shadow group 'adm'
Jan 20 18:23:11 np0005589310.novalocal useradd[989]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: Generating public/private rsa key pair.
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: The key fingerprint is:
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: SHA256:xq/BKHwS7OYSbvQhm5GNkVJ1N+8ccXAte0mKmDWDeIE root@np0005589310.novalocal
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: The key's randomart image is:
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: +---[RSA 3072]----+
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |   .. .o+oo.o.   |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |  .  .E.oo++. o  |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: | . .   . +o+ = . |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |. o.   .oo..o o  |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: | . =o   S o  .   |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |  Boo. + .       |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: | o B*.o o .      |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |  *o.+   o       |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: | . ..   .        |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: Generating public/private ecdsa key pair.
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: The key fingerprint is:
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: SHA256:Pi/w76stpzFlcqqESeZJim8TbgmEFsOJOR+R+Xn/+EQ root@np0005589310.novalocal
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: The key's randomart image is:
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: +---[ECDSA 256]---+
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |o.o+             |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |+=+              |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |.ooo .           |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |.o. o .          |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |o   +. .S E      |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |...* +...B       |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |.o.o= .o*o.      |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: | .*  . .=Bo      |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: | o..  . oXB.     |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: Generating public/private ed25519 key pair.
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: The key fingerprint is:
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: SHA256:T2RZBGG+0a4+EG4dEp0XIljSlRrvCSsBJEPBHiJL+Gw root@np0005589310.novalocal
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: The key's randomart image is:
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: +--[ED25519 256]--+
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |.o=o. .+oo**+.   |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |+.oo. ..+++=.    |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |o* . .   =*..    |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |. E   . =oo+     |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: | .     oSBoo.    |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |      . =o+.     |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |       o .o      |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |         ..      |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: |          ..     |
Jan 20 18:23:12 np0005589310.novalocal cloud-init[921]: +----[SHA256]-----+
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Reached target Network is Online.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Starting System Logging Service...
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 20 18:23:12 np0005589310.novalocal sm-notify[1006]: Version 2.5.4 starting
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Starting Permit User Sessions...
Jan 20 18:23:12 np0005589310.novalocal sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 20 18:23:12 np0005589310.novalocal sshd[1008]: Server listening on :: port 22.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Finished Permit User Sessions.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Started Command Scheduler.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Started Getty on tty1.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Reached target Login Prompts.
Jan 20 18:23:12 np0005589310.novalocal crond[1011]: (CRON) STARTUP (1.5.7)
Jan 20 18:23:12 np0005589310.novalocal crond[1011]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 20 18:23:12 np0005589310.novalocal crond[1011]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 44% if used.)
Jan 20 18:23:12 np0005589310.novalocal crond[1011]: (CRON) INFO (running with inotify support)
Jan 20 18:23:12 np0005589310.novalocal rsyslogd[1007]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1007" x-info="https://www.rsyslog.com"] start
Jan 20 18:23:12 np0005589310.novalocal rsyslogd[1007]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Started System Logging Service.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Reached target Multi-User System.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 20 18:23:12 np0005589310.novalocal rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 18:23:12 np0005589310.novalocal kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Jan 20 18:23:12 np0005589310.novalocal kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-661.el9.x86_64kdump.img
Jan 20 18:23:12 np0005589310.novalocal cloud-init[1148]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Tue, 20 Jan 2026 18:23:12 +0000. Up 10.15 seconds.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 20 18:23:12 np0005589310.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 20 18:23:12 np0005589310.novalocal dracut[1267]: dracut-057-102.git20250818.el9
Jan 20 18:23:12 np0005589310.novalocal cloud-init[1285]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Tue, 20 Jan 2026 18:23:12 +0000. Up 10.57 seconds.
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/22ac9141-3960-4912-b20e-19fc8a328d40 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-661.el9.x86_64kdump.img 5.14.0-661.el9.x86_64
Jan 20 18:23:13 np0005589310.novalocal cloud-init[1302]: #############################################################
Jan 20 18:23:13 np0005589310.novalocal cloud-init[1305]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 20 18:23:13 np0005589310.novalocal cloud-init[1310]: 256 SHA256:Pi/w76stpzFlcqqESeZJim8TbgmEFsOJOR+R+Xn/+EQ root@np0005589310.novalocal (ECDSA)
Jan 20 18:23:13 np0005589310.novalocal cloud-init[1314]: 256 SHA256:T2RZBGG+0a4+EG4dEp0XIljSlRrvCSsBJEPBHiJL+Gw root@np0005589310.novalocal (ED25519)
Jan 20 18:23:13 np0005589310.novalocal cloud-init[1321]: 3072 SHA256:xq/BKHwS7OYSbvQhm5GNkVJ1N+8ccXAte0mKmDWDeIE root@np0005589310.novalocal (RSA)
Jan 20 18:23:13 np0005589310.novalocal cloud-init[1323]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 20 18:23:13 np0005589310.novalocal cloud-init[1324]: #############################################################
Jan 20 18:23:13 np0005589310.novalocal cloud-init[1285]: Cloud-init v. 24.4-8.el9 finished at Tue, 20 Jan 2026 18:23:13 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.77 seconds
Jan 20 18:23:13 np0005589310.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 20 18:23:13 np0005589310.novalocal systemd[1]: Reached target Cloud-init target.
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 20 18:23:13 np0005589310.novalocal dracut[1269]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 20 18:23:13 np0005589310.novalocal sshd-session[1612]: Connection reset by 38.102.83.114 port 37224 [preauth]
Jan 20 18:23:13 np0005589310.novalocal sshd-session[1629]: Unable to negotiate with 38.102.83.114 port 37240: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 20 18:23:14 np0005589310.novalocal sshd-session[1637]: Connection reset by 38.102.83.114 port 37242 [preauth]
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 20 18:23:14 np0005589310.novalocal sshd-session[1651]: Unable to negotiate with 38.102.83.114 port 37256: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 20 18:23:14 np0005589310.novalocal sshd-session[1659]: Unable to negotiate with 38.102.83.114 port 37264: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 20 18:23:14 np0005589310.novalocal sshd-session[1664]: Connection reset by 38.102.83.114 port 37272 [preauth]
Jan 20 18:23:14 np0005589310.novalocal sshd-session[1682]: Connection closed by 38.102.83.114 port 37280 [preauth]
Jan 20 18:23:14 np0005589310.novalocal sshd-session[1695]: Unable to negotiate with 38.102.83.114 port 37290: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 20 18:23:14 np0005589310.novalocal sshd-session[1705]: Unable to negotiate with 38.102.83.114 port 37292: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: memstrack is not available
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: memstrack is not available
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 20 18:23:14 np0005589310.novalocal dracut[1269]: *** Including module: systemd ***
Jan 20 18:23:15 np0005589310.novalocal dracut[1269]: *** Including module: fips ***
Jan 20 18:23:15 np0005589310.novalocal chronyd[784]: Selected source 23.159.16.194 (2.centos.pool.ntp.org)
Jan 20 18:23:15 np0005589310.novalocal chronyd[784]: System clock TAI offset set to 37 seconds
Jan 20 18:23:15 np0005589310.novalocal dracut[1269]: *** Including module: systemd-initrd ***
Jan 20 18:23:15 np0005589310.novalocal dracut[1269]: *** Including module: i18n ***
Jan 20 18:23:15 np0005589310.novalocal dracut[1269]: *** Including module: drm ***
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]: *** Including module: prefixdevname ***
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]: *** Including module: kernel-modules ***
Jan 20 18:23:16 np0005589310.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]: *** Including module: kernel-modules-extra ***
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]: *** Including module: qemu ***
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]: *** Including module: fstab-sys ***
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]: *** Including module: rootfs-block ***
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]: *** Including module: terminfo ***
Jan 20 18:23:16 np0005589310.novalocal dracut[1269]: *** Including module: udev-rules ***
Jan 20 18:23:17 np0005589310.novalocal dracut[1269]: Skipping udev rule: 91-permissions.rules
Jan 20 18:23:17 np0005589310.novalocal dracut[1269]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 20 18:23:17 np0005589310.novalocal dracut[1269]: *** Including module: virtiofs ***
Jan 20 18:23:17 np0005589310.novalocal dracut[1269]: *** Including module: dracut-systemd ***
Jan 20 18:23:17 np0005589310.novalocal dracut[1269]: *** Including module: usrmount ***
Jan 20 18:23:17 np0005589310.novalocal dracut[1269]: *** Including module: base ***
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]: *** Including module: fs-lib ***
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]: *** Including module: kdumpbase ***
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: IRQ 25 affinity is now unmanaged
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: IRQ 31 affinity is now unmanaged
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: IRQ 28 affinity is now unmanaged
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: IRQ 32 affinity is now unmanaged
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: IRQ 30 affinity is now unmanaged
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 20 18:23:18 np0005589310.novalocal irqbalance[789]: IRQ 29 affinity is now unmanaged
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:   microcode_ctl module: mangling fw_dir
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: configuration "intel" is ignored
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 20 18:23:18 np0005589310.novalocal dracut[1269]: *** Including module: openssl ***
Jan 20 18:23:19 np0005589310.novalocal dracut[1269]: *** Including module: shutdown ***
Jan 20 18:23:19 np0005589310.novalocal dracut[1269]: *** Including module: squash ***
Jan 20 18:23:19 np0005589310.novalocal dracut[1269]: *** Including modules done ***
Jan 20 18:23:19 np0005589310.novalocal dracut[1269]: *** Installing kernel module dependencies ***
Jan 20 18:23:19 np0005589310.novalocal dracut[1269]: *** Installing kernel module dependencies done ***
Jan 20 18:23:19 np0005589310.novalocal dracut[1269]: *** Resolving executable dependencies ***
Jan 20 18:23:20 np0005589310.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 18:23:21 np0005589310.novalocal dracut[1269]: *** Resolving executable dependencies done ***
Jan 20 18:23:21 np0005589310.novalocal dracut[1269]: *** Generating early-microcode cpio image ***
Jan 20 18:23:21 np0005589310.novalocal dracut[1269]: *** Store current command line parameters ***
Jan 20 18:23:21 np0005589310.novalocal dracut[1269]: Stored kernel commandline:
Jan 20 18:23:21 np0005589310.novalocal dracut[1269]: No dracut internal kernel commandline stored in the initramfs
Jan 20 18:23:21 np0005589310.novalocal dracut[1269]: *** Install squash loader ***
Jan 20 18:23:22 np0005589310.novalocal dracut[1269]: *** Squashing the files inside the initramfs ***
Jan 20 18:23:23 np0005589310.novalocal dracut[1269]: *** Squashing the files inside the initramfs done ***
Jan 20 18:23:23 np0005589310.novalocal dracut[1269]: *** Creating image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' ***
Jan 20 18:23:23 np0005589310.novalocal dracut[1269]: *** Hardlinking files ***
Jan 20 18:23:23 np0005589310.novalocal dracut[1269]: Mode:           real
Jan 20 18:23:23 np0005589310.novalocal dracut[1269]: Files:          50
Jan 20 18:23:23 np0005589310.novalocal dracut[1269]: Linked:         0 files
Jan 20 18:23:23 np0005589310.novalocal dracut[1269]: Compared:       0 xattrs
Jan 20 18:23:23 np0005589310.novalocal dracut[1269]: Compared:       0 files
Jan 20 18:23:23 np0005589310.novalocal dracut[1269]: Saved:          0 B
Jan 20 18:23:23 np0005589310.novalocal dracut[1269]: Duration:       0.000587 seconds
Jan 20 18:23:23 np0005589310.novalocal dracut[1269]: *** Hardlinking files done ***
Jan 20 18:23:24 np0005589310.novalocal dracut[1269]: *** Creating initramfs image file '/boot/initramfs-5.14.0-661.el9.x86_64kdump.img' done ***
Jan 20 18:23:24 np0005589310.novalocal kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Jan 20 18:23:24 np0005589310.novalocal kdumpctl[1019]: kdump: Starting kdump: [OK]
Jan 20 18:23:24 np0005589310.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 20 18:23:24 np0005589310.novalocal systemd[1]: Startup finished in 1.607s (kernel) + 2.881s (initrd) + 17.981s (userspace) = 22.470s.
Jan 20 18:23:39 np0005589310.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 18:23:52 np0005589310.novalocal sshd-session[4306]: Invalid user sol from 45.148.10.240 port 46396
Jan 20 18:23:52 np0005589310.novalocal sshd-session[4306]: Connection closed by invalid user sol 45.148.10.240 port 46396 [preauth]
Jan 20 18:26:14 np0005589310.novalocal sshd-session[4308]: Invalid user sol from 45.148.10.240 port 53546
Jan 20 18:26:14 np0005589310.novalocal sshd-session[4308]: Connection closed by invalid user sol 45.148.10.240 port 53546 [preauth]
Jan 20 18:26:20 np0005589310.novalocal sshd-session[4310]: Accepted publickey for zuul from 38.102.83.114 port 49486 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 20 18:26:20 np0005589310.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 20 18:26:20 np0005589310.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 20 18:26:20 np0005589310.novalocal systemd-logind[797]: New session 1 of user zuul.
Jan 20 18:26:20 np0005589310.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 20 18:26:20 np0005589310.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Queued start job for default target Main User Target.
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Created slice User Application Slice.
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Reached target Paths.
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Reached target Timers.
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Starting D-Bus User Message Bus Socket...
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Starting Create User's Volatile Files and Directories...
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Listening on D-Bus User Message Bus Socket.
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Reached target Sockets.
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Finished Create User's Volatile Files and Directories.
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Reached target Basic System.
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Reached target Main User Target.
Jan 20 18:26:20 np0005589310.novalocal systemd[4314]: Startup finished in 233ms.
Jan 20 18:26:20 np0005589310.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 20 18:26:20 np0005589310.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 20 18:26:20 np0005589310.novalocal sshd-session[4310]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:26:21 np0005589310.novalocal python3[4397]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:26:23 np0005589310.novalocal python3[4425]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:26:29 np0005589310.novalocal python3[4483]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:26:30 np0005589310.novalocal python3[4523]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 20 18:26:33 np0005589310.novalocal python3[4549]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCh3Yi5Xd7DiYa1i0K0hEQRl3npfFeF2fqveQSvJ3+Qh32GCocOe6DbPFsG9H7BUHVEflWNJZdPPlCUM6C7xU61TwiHRqIfRKwDP1ZZZ0c9F1IEp4kgnp+KxBpgAFTpPr0g8DlLHgZvJCKpyLTjQm3nxxXkLT/AM0aER72bKzo+yElY3FC/T6Vlg4zUI5whCnrOdFi460EqOWARONWoFl4YQvpnXjL1oSiyy/AA2SLZMmu8pnl8mZAtlFs96/T6+MbAiycKiV9aiIWM74tzjY/FQ43abQCIFQ2LFjCzP+CKDzTQkhX+FFXDEpV9sFfE7T5L2IwqBGu8OmPOgXKyZRFUWYdJx+HWYiUq4j+8LRrEqLxB5fs/2Zn4CBcTKG1Qkoz2vcDiox/P0zVycwzQFSwMiqPxWGAsRhrGubvXvf4HCaBQzRjRp/0xWjKqqqYOhuK/ThW7fpEkuTvS7g1A+oJZNN7gIt2PgK45UOOSCD1xHtQLeR5HuNR7giWXHVKO7T0= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:33 np0005589310.novalocal python3[4573]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:34 np0005589310.novalocal python3[4672]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:26:34 np0005589310.novalocal python3[4743]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768933593.6811175-207-184448163679159/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=480e6c18599849fe9f94b7a2a9bafd87_id_rsa follow=False checksum=e533cfffdb60e29c3d9ad08b7280ab1612aed717 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:35 np0005589310.novalocal python3[4866]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:26:35 np0005589310.novalocal python3[4937]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768933594.7762268-240-14635357441409/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=480e6c18599849fe9f94b7a2a9bafd87_id_rsa.pub follow=False checksum=3b367e5376f0bd06906e6d88484065951910e849 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:36 np0005589310.novalocal python3[4985]: ansible-ping Invoked with data=pong
Jan 20 18:26:37 np0005589310.novalocal python3[5009]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:26:39 np0005589310.novalocal python3[5067]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 20 18:26:40 np0005589310.novalocal python3[5099]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:40 np0005589310.novalocal python3[5123]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:40 np0005589310.novalocal python3[5147]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:41 np0005589310.novalocal python3[5171]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:41 np0005589310.novalocal python3[5195]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:41 np0005589310.novalocal python3[5219]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:43 np0005589310.novalocal sudo[5243]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilfzhsdxzshracdjsdztrhiijjtgcyny ; /usr/bin/python3'
Jan 20 18:26:43 np0005589310.novalocal sudo[5243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:26:43 np0005589310.novalocal python3[5245]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:43 np0005589310.novalocal sudo[5243]: pam_unix(sudo:session): session closed for user root
Jan 20 18:26:44 np0005589310.novalocal sudo[5321]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvdwuifsurpqkpzroejrypirixhkadbj ; /usr/bin/python3'
Jan 20 18:26:44 np0005589310.novalocal sudo[5321]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:26:44 np0005589310.novalocal python3[5323]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:26:44 np0005589310.novalocal sudo[5321]: pam_unix(sudo:session): session closed for user root
Jan 20 18:26:44 np0005589310.novalocal sudo[5394]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khklljqtzqytvczlhfwivelghckvwekm ; /usr/bin/python3'
Jan 20 18:26:44 np0005589310.novalocal sudo[5394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:26:44 np0005589310.novalocal python3[5396]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1768933603.8398094-21-152098359614112/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:44 np0005589310.novalocal sudo[5394]: pam_unix(sudo:session): session closed for user root
Jan 20 18:26:45 np0005589310.novalocal python3[5444]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:45 np0005589310.novalocal python3[5468]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:45 np0005589310.novalocal python3[5492]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:46 np0005589310.novalocal python3[5516]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:46 np0005589310.novalocal python3[5540]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:46 np0005589310.novalocal python3[5564]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:47 np0005589310.novalocal python3[5588]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:47 np0005589310.novalocal python3[5612]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:47 np0005589310.novalocal python3[5636]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:47 np0005589310.novalocal python3[5660]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:48 np0005589310.novalocal python3[5684]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:48 np0005589310.novalocal irqbalance[789]: Cannot change IRQ 26 affinity: Operation not permitted
Jan 20 18:26:48 np0005589310.novalocal irqbalance[789]: IRQ 26 affinity is now unmanaged
Jan 20 18:26:48 np0005589310.novalocal python3[5708]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:48 np0005589310.novalocal python3[5732]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:49 np0005589310.novalocal python3[5756]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:49 np0005589310.novalocal python3[5780]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:49 np0005589310.novalocal python3[5804]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:49 np0005589310.novalocal python3[5828]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:50 np0005589310.novalocal python3[5852]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:50 np0005589310.novalocal python3[5876]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:50 np0005589310.novalocal python3[5900]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:50 np0005589310.novalocal python3[5924]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:51 np0005589310.novalocal python3[5948]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:51 np0005589310.novalocal python3[5972]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:51 np0005589310.novalocal python3[5996]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:51 np0005589310.novalocal python3[6020]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:52 np0005589310.novalocal python3[6044]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:26:54 np0005589310.novalocal sudo[6068]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txrnxzlqbytwoywmizecreynjkvkfczn ; /usr/bin/python3'
Jan 20 18:26:54 np0005589310.novalocal sudo[6068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:26:55 np0005589310.novalocal python3[6070]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 18:26:55 np0005589310.novalocal systemd[1]: Starting Time & Date Service...
Jan 20 18:26:55 np0005589310.novalocal systemd[1]: Started Time & Date Service.
Jan 20 18:26:55 np0005589310.novalocal systemd-timedated[6072]: Changed time zone to 'UTC' (UTC).
Jan 20 18:26:55 np0005589310.novalocal sudo[6068]: pam_unix(sudo:session): session closed for user root
Jan 20 18:26:55 np0005589310.novalocal sudo[6099]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwasvtsqyspexsnlbalyqspfvugflfhp ; /usr/bin/python3'
Jan 20 18:26:55 np0005589310.novalocal sudo[6099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:26:55 np0005589310.novalocal python3[6101]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:55 np0005589310.novalocal sudo[6099]: pam_unix(sudo:session): session closed for user root
Jan 20 18:26:56 np0005589310.novalocal python3[6177]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:26:56 np0005589310.novalocal python3[6248]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1768933615.8860524-153-242223373032831/source _original_basename=tmp9mb_b4ir follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:57 np0005589310.novalocal python3[6348]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:26:57 np0005589310.novalocal python3[6419]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768933616.7911553-183-191478135456329/source _original_basename=tmpi8gsu5pn follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:58 np0005589310.novalocal sudo[6519]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufjipgsyoashcfxtrmvponhkiifvavrg ; /usr/bin/python3'
Jan 20 18:26:58 np0005589310.novalocal sudo[6519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:26:58 np0005589310.novalocal python3[6521]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:26:58 np0005589310.novalocal sudo[6519]: pam_unix(sudo:session): session closed for user root
Jan 20 18:26:58 np0005589310.novalocal sudo[6592]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gahqzljrishqfsfxkqdnhgfyvazemfjk ; /usr/bin/python3'
Jan 20 18:26:58 np0005589310.novalocal sudo[6592]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:26:58 np0005589310.novalocal python3[6594]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1768933617.8833928-231-99166232399149/source _original_basename=tmppbey079h follow=False checksum=675da38221554070fad736c9d717667e6ac7d120 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:26:58 np0005589310.novalocal sudo[6592]: pam_unix(sudo:session): session closed for user root
Jan 20 18:26:59 np0005589310.novalocal python3[6642]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:26:59 np0005589310.novalocal python3[6668]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:26:59 np0005589310.novalocal sudo[6746]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muoxpqljuuxselasixkgsywicedfzttn ; /usr/bin/python3'
Jan 20 18:26:59 np0005589310.novalocal sudo[6746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:26:59 np0005589310.novalocal python3[6748]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:26:59 np0005589310.novalocal sudo[6746]: pam_unix(sudo:session): session closed for user root
Jan 20 18:26:59 np0005589310.novalocal sudo[6819]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcckxmxprpxglbghnobedrrrvqnrfnrg ; /usr/bin/python3'
Jan 20 18:26:59 np0005589310.novalocal sudo[6819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:27:00 np0005589310.novalocal python3[6821]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1768933619.4859846-273-191573941791787/source _original_basename=tmp125ww3lk follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:27:00 np0005589310.novalocal sudo[6819]: pam_unix(sudo:session): session closed for user root
Jan 20 18:27:00 np0005589310.novalocal sudo[6870]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmtfooqbuithfiybwnfkcxwvgpalbbal ; /usr/bin/python3'
Jan 20 18:27:00 np0005589310.novalocal sudo[6870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:27:00 np0005589310.novalocal python3[6872]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ec2-ffbe-120e-581e-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:27:00 np0005589310.novalocal sudo[6870]: pam_unix(sudo:session): session closed for user root
Jan 20 18:27:01 np0005589310.novalocal python3[6900]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-120e-581e-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 20 18:27:02 np0005589310.novalocal python3[6928]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:27:20 np0005589310.novalocal sudo[6952]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuquxpgbhjwjjimrjtyouyzgdknvahwo ; /usr/bin/python3'
Jan 20 18:27:20 np0005589310.novalocal sudo[6952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:27:20 np0005589310.novalocal python3[6954]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:27:20 np0005589310.novalocal sudo[6952]: pam_unix(sudo:session): session closed for user root
Jan 20 18:27:25 np0005589310.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 18:27:59 np0005589310.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 20 18:27:59 np0005589310.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 20 18:27:59 np0005589310.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 20 18:27:59 np0005589310.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 20 18:27:59 np0005589310.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 20 18:27:59 np0005589310.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 20 18:27:59 np0005589310.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 20 18:27:59 np0005589310.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 20 18:27:59 np0005589310.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 20 18:27:59 np0005589310.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 20 18:27:59 np0005589310.novalocal NetworkManager[858]: <info>  [1768933679.5258] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 18:27:59 np0005589310.novalocal systemd-udevd[6957]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:27:59 np0005589310.novalocal NetworkManager[858]: <info>  [1768933679.5521] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:27:59 np0005589310.novalocal NetworkManager[858]: <info>  [1768933679.5544] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 20 18:27:59 np0005589310.novalocal NetworkManager[858]: <info>  [1768933679.5546] device (eth1): carrier: link connected
Jan 20 18:27:59 np0005589310.novalocal NetworkManager[858]: <info>  [1768933679.5548] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 20 18:27:59 np0005589310.novalocal NetworkManager[858]: <info>  [1768933679.5552] policy: auto-activating connection 'Wired connection 1' (fd33b000-20d4-3dcd-9e30-523cad9af7fa)
Jan 20 18:27:59 np0005589310.novalocal NetworkManager[858]: <info>  [1768933679.5555] device (eth1): Activation: starting connection 'Wired connection 1' (fd33b000-20d4-3dcd-9e30-523cad9af7fa)
Jan 20 18:27:59 np0005589310.novalocal NetworkManager[858]: <info>  [1768933679.5556] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:27:59 np0005589310.novalocal NetworkManager[858]: <info>  [1768933679.5558] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:27:59 np0005589310.novalocal NetworkManager[858]: <info>  [1768933679.5561] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:27:59 np0005589310.novalocal NetworkManager[858]: <info>  [1768933679.5565] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:28:00 np0005589310.novalocal python3[6984]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ec2-ffbe-feea-74cb-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:28:10 np0005589310.novalocal sudo[7062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgikpefbhmvfdfggadspjapyotbyfswr ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 18:28:10 np0005589310.novalocal sudo[7062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:28:10 np0005589310.novalocal python3[7064]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:28:10 np0005589310.novalocal sudo[7062]: pam_unix(sudo:session): session closed for user root
Jan 20 18:28:10 np0005589310.novalocal sudo[7135]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjauryyoxkwhfldkzzyakszdcnmhvsqs ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 18:28:10 np0005589310.novalocal sudo[7135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:28:10 np0005589310.novalocal python3[7137]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768933690.2825618-102-184637403101572/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=cd4a55093ad04d42dea8a9f1c133b61b367dadc0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:28:10 np0005589310.novalocal sudo[7135]: pam_unix(sudo:session): session closed for user root
Jan 20 18:28:11 np0005589310.novalocal sudo[7185]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liyhkhxglkpxorapastgdfgwhldrwzqp ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 18:28:11 np0005589310.novalocal sudo[7185]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:28:11 np0005589310.novalocal python3[7187]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[858]: <info>  [1768933691.7419] caught SIGTERM, shutting down normally.
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: Stopping Network Manager...
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[858]: <info>  [1768933691.7427] dhcp4 (eth0): canceled DHCP transaction
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[858]: <info>  [1768933691.7428] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[858]: <info>  [1768933691.7428] dhcp4 (eth0): state changed no lease
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[858]: <info>  [1768933691.7430] manager: NetworkManager state is now CONNECTING
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[858]: <info>  [1768933691.7524] dhcp4 (eth1): canceled DHCP transaction
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[858]: <info>  [1768933691.7524] dhcp4 (eth1): state changed no lease
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[858]: <info>  [1768933691.7580] exiting (success)
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: Stopped Network Manager.
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: NetworkManager.service: Consumed 1.970s CPU time, 10.0M memory peak.
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: Starting Network Manager...
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.8320] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:67fc3c9d-8ab5-4c8d-ad06-0b5b4ad77266)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.8323] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.8385] manager[0x55602a111000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: Starting Hostname Service...
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: Started Hostname Service.
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9081] hostname: hostname: using hostnamed
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9081] hostname: static hostname changed from (none) to "np0005589310.novalocal"
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9088] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9093] manager[0x55602a111000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9094] manager[0x55602a111000]: rfkill: WWAN hardware radio set enabled
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9125] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9125] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9126] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9126] manager: Networking is enabled by state file
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9129] settings: Loaded settings plugin: keyfile (internal)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9132] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9161] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9171] dhcp: init: Using DHCP client 'internal'
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9174] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9179] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9185] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9193] device (lo): Activation: starting connection 'lo' (9dbcb845-48af-44e7-aac2-9b1c27d04ec3)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9200] device (eth0): carrier: link connected
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9205] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9210] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9211] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9217] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9224] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9231] device (eth1): carrier: link connected
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9236] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9242] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (fd33b000-20d4-3dcd-9e30-523cad9af7fa) (indicated)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9242] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9248] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9255] device (eth1): Activation: starting connection 'Wired connection 1' (fd33b000-20d4-3dcd-9e30-523cad9af7fa)
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: Started Network Manager.
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9263] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9268] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9270] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9272] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9275] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9278] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9280] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9283] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9286] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9293] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9297] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9307] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9309] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9329] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9331] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9336] device (lo): Activation: successful, device activated.
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9359] dhcp4 (eth0): state changed new lease, address=38.102.83.210
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9367] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 18:28:11 np0005589310.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9445] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9466] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9468] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9472] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9475] device (eth0): Activation: successful, device activated.
Jan 20 18:28:11 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933691.9482] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 18:28:11 np0005589310.novalocal sudo[7185]: pam_unix(sudo:session): session closed for user root
Jan 20 18:28:12 np0005589310.novalocal python3[7271]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ec2-ffbe-feea-74cb-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:28:22 np0005589310.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 18:28:31 np0005589310.novalocal sshd-session[7274]: Invalid user sol from 45.148.10.240 port 47622
Jan 20 18:28:31 np0005589310.novalocal sshd-session[7274]: Connection closed by invalid user sol 45.148.10.240 port 47622 [preauth]
Jan 20 18:28:41 np0005589310.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 18:28:50 np0005589310.novalocal systemd[4314]: Starting Mark boot as successful...
Jan 20 18:28:50 np0005589310.novalocal systemd[4314]: Finished Mark boot as successful.
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.3738] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 18:28:57 np0005589310.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 18:28:57 np0005589310.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4053] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4057] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4068] device (eth1): Activation: successful, device activated.
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4079] manager: startup complete
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4083] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <warn>  [1768933737.4092] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4104] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 20 18:28:57 np0005589310.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4194] dhcp4 (eth1): canceled DHCP transaction
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4195] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4195] dhcp4 (eth1): state changed no lease
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4217] policy: auto-activating connection 'ci-private-network' (3f70ede9-7960-5c64-9771-a2eedfd4d85a)
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4226] device (eth1): Activation: starting connection 'ci-private-network' (3f70ede9-7960-5c64-9771-a2eedfd4d85a)
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4228] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4233] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4243] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4258] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4305] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4308] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:28:57 np0005589310.novalocal NetworkManager[7195]: <info>  [1768933737.4319] device (eth1): Activation: successful, device activated.
Jan 20 18:29:07 np0005589310.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 18:29:11 np0005589310.novalocal sshd-session[4324]: Received disconnect from 38.102.83.114 port 49486:11: disconnected by user
Jan 20 18:29:11 np0005589310.novalocal sshd-session[4324]: Disconnected from user zuul 38.102.83.114 port 49486
Jan 20 18:29:11 np0005589310.novalocal sshd-session[4310]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:29:11 np0005589310.novalocal systemd-logind[797]: Session 1 logged out. Waiting for processes to exit.
Jan 20 18:29:11 np0005589310.novalocal sshd-session[7304]: Accepted publickey for zuul from 38.102.83.114 port 56758 ssh2: RSA SHA256:NUQhMT8WFYQNoBbXELd3vtykrkPErLT7OjFC/UP50jg
Jan 20 18:29:11 np0005589310.novalocal systemd-logind[797]: New session 3 of user zuul.
Jan 20 18:29:11 np0005589310.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 20 18:29:11 np0005589310.novalocal sshd-session[7304]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:29:12 np0005589310.novalocal sudo[7384]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uauugsocjshbvhqexvzfclxkxymvkugb ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 18:29:12 np0005589310.novalocal sudo[7384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:29:12 np0005589310.novalocal python3[7386]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:29:12 np0005589310.novalocal sudo[7384]: pam_unix(sudo:session): session closed for user root
Jan 20 18:29:12 np0005589310.novalocal sudo[7457]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpxhwxwxmniuzkzkuuthpygiqvidbpgn ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 20 18:29:12 np0005589310.novalocal sudo[7457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:29:12 np0005589310.novalocal python3[7459]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/ansible-tmp-1768933752.1180925-267-225970933910862/source _original_basename=tmpta63ztqu follow=False checksum=28a61f56a02f2805646416fe6ddd7237f7944961 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:29:12 np0005589310.novalocal sudo[7457]: pam_unix(sudo:session): session closed for user root
Jan 20 18:29:15 np0005589310.novalocal sshd-session[7307]: Connection closed by 38.102.83.114 port 56758
Jan 20 18:29:15 np0005589310.novalocal sshd-session[7304]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:29:15 np0005589310.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 20 18:29:15 np0005589310.novalocal systemd-logind[797]: Session 3 logged out. Waiting for processes to exit.
Jan 20 18:29:15 np0005589310.novalocal systemd-logind[797]: Removed session 3.
Jan 20 18:30:48 np0005589310.novalocal sshd-session[7485]: Invalid user sol from 45.148.10.240 port 56314
Jan 20 18:30:48 np0005589310.novalocal sshd-session[7485]: Connection closed by invalid user sol 45.148.10.240 port 56314 [preauth]
Jan 20 18:31:50 np0005589310.novalocal systemd[4314]: Created slice User Background Tasks Slice.
Jan 20 18:31:50 np0005589310.novalocal systemd[4314]: Starting Cleanup of User's Temporary Files and Directories...
Jan 20 18:31:50 np0005589310.novalocal systemd[4314]: Finished Cleanup of User's Temporary Files and Directories.
Jan 20 18:31:51 np0005589310.novalocal sshd-session[7489]: Connection closed by 43.103.0.45 port 54968
Jan 20 18:33:02 np0005589310.novalocal sshd-session[7490]: Invalid user sol from 45.148.10.240 port 43452
Jan 20 18:33:02 np0005589310.novalocal sshd-session[7490]: Connection closed by invalid user sol 45.148.10.240 port 43452 [preauth]
Jan 20 18:35:15 np0005589310.novalocal sshd-session[7495]: Invalid user sol from 45.148.10.240 port 56184
Jan 20 18:35:15 np0005589310.novalocal sshd-session[7495]: Connection closed by invalid user sol 45.148.10.240 port 56184 [preauth]
Jan 20 18:36:44 np0005589310.novalocal sshd-session[7498]: Accepted publickey for zuul from 38.102.83.114 port 59842 ssh2: RSA SHA256:NUQhMT8WFYQNoBbXELd3vtykrkPErLT7OjFC/UP50jg
Jan 20 18:36:44 np0005589310.novalocal systemd-logind[797]: New session 4 of user zuul.
Jan 20 18:36:44 np0005589310.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 20 18:36:44 np0005589310.novalocal sshd-session[7498]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:36:44 np0005589310.novalocal sudo[7525]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctaxkvmerwskbdrxwumimajlhoqfqeeb ; /usr/bin/python3'
Jan 20 18:36:44 np0005589310.novalocal sudo[7525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:44 np0005589310.novalocal python3[7527]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-78e9-2ad2-00000000216f-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:36:44 np0005589310.novalocal sudo[7525]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:44 np0005589310.novalocal sudo[7554]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkwwflgmyqqpfhcnkedvoeqqokkytzoo ; /usr/bin/python3'
Jan 20 18:36:44 np0005589310.novalocal sudo[7554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:45 np0005589310.novalocal python3[7556]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:45 np0005589310.novalocal sudo[7554]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:45 np0005589310.novalocal sudo[7580]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcsqnpdjmplowwkvmfnnnbmqjkwtjtxi ; /usr/bin/python3'
Jan 20 18:36:45 np0005589310.novalocal sudo[7580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:45 np0005589310.novalocal python3[7582]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:45 np0005589310.novalocal sudo[7580]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:45 np0005589310.novalocal sudo[7606]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xxxyjgdzrwqexeonngpyapjumbdtujfw ; /usr/bin/python3'
Jan 20 18:36:45 np0005589310.novalocal sudo[7606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:45 np0005589310.novalocal python3[7608]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:45 np0005589310.novalocal sudo[7606]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:45 np0005589310.novalocal sudo[7632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywkwkwyttetznnenxsxsmishrjuzxvcv ; /usr/bin/python3'
Jan 20 18:36:45 np0005589310.novalocal sudo[7632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:45 np0005589310.novalocal python3[7634]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:45 np0005589310.novalocal sudo[7632]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:46 np0005589310.novalocal sudo[7658]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcyxabtbohiugckofzqwbgjfkynbsglo ; /usr/bin/python3'
Jan 20 18:36:46 np0005589310.novalocal sudo[7658]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:46 np0005589310.novalocal python3[7660]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:46 np0005589310.novalocal sudo[7658]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:46 np0005589310.novalocal sudo[7736]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiwetqtsbuonclvuiubbwwqyfwksabdc ; /usr/bin/python3'
Jan 20 18:36:46 np0005589310.novalocal sudo[7736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:46 np0005589310.novalocal python3[7738]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:36:46 np0005589310.novalocal sudo[7736]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:46 np0005589310.novalocal sudo[7809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynvsfnyjzzwiskahulrqbjdcutoeafoj ; /usr/bin/python3'
Jan 20 18:36:46 np0005589310.novalocal sudo[7809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:47 np0005589310.novalocal python3[7811]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934206.4994287-498-233378964060786/source _original_basename=tmp694qpsgn follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:36:47 np0005589310.novalocal sudo[7809]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:47 np0005589310.novalocal sudo[7859]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyrcxqzbodwqoucjawkenuygqmrtener ; /usr/bin/python3'
Jan 20 18:36:47 np0005589310.novalocal sudo[7859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:48 np0005589310.novalocal python3[7861]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 18:36:48 np0005589310.novalocal systemd[1]: Reloading.
Jan 20 18:36:48 np0005589310.novalocal systemd-rc-local-generator[7880]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:36:48 np0005589310.novalocal sudo[7859]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:49 np0005589310.novalocal sudo[7915]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqfsaochofybtsahakqynxjyptmqgevi ; /usr/bin/python3'
Jan 20 18:36:49 np0005589310.novalocal sudo[7915]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:49 np0005589310.novalocal python3[7917]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 20 18:36:49 np0005589310.novalocal sudo[7915]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:50 np0005589310.novalocal sudo[7941]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbjakdmuhgikdmyjhjwwxrhwvrsfeoxk ; /usr/bin/python3'
Jan 20 18:36:50 np0005589310.novalocal sudo[7941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:50 np0005589310.novalocal python3[7943]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:36:50 np0005589310.novalocal sudo[7941]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:50 np0005589310.novalocal sudo[7969]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydfwkwhevekqqltqkcsczejdjarwstiv ; /usr/bin/python3'
Jan 20 18:36:50 np0005589310.novalocal sudo[7969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:50 np0005589310.novalocal python3[7971]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:36:50 np0005589310.novalocal sudo[7969]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:50 np0005589310.novalocal sudo[7997]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwkhvxijuflyxpdqzconpqlpspiohzul ; /usr/bin/python3'
Jan 20 18:36:50 np0005589310.novalocal sudo[7997]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:50 np0005589310.novalocal python3[7999]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:36:50 np0005589310.novalocal sudo[7997]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:50 np0005589310.novalocal sudo[8025]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwpkgwkchglroknexfzncoxbcczodbpq ; /usr/bin/python3'
Jan 20 18:36:50 np0005589310.novalocal sudo[8025]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:51 np0005589310.novalocal python3[8027]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:36:51 np0005589310.novalocal sudo[8025]: pam_unix(sudo:session): session closed for user root
Jan 20 18:36:51 np0005589310.novalocal python3[8054]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163ec2-ffbe-78e9-2ad2-000000002176-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:36:52 np0005589310.novalocal python3[8084]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 18:36:54 np0005589310.novalocal sshd-session[7501]: Connection closed by 38.102.83.114 port 59842
Jan 20 18:36:54 np0005589310.novalocal sshd-session[7498]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:36:54 np0005589310.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 20 18:36:54 np0005589310.novalocal systemd[1]: session-4.scope: Consumed 3.934s CPU time.
Jan 20 18:36:54 np0005589310.novalocal systemd-logind[797]: Session 4 logged out. Waiting for processes to exit.
Jan 20 18:36:54 np0005589310.novalocal systemd-logind[797]: Removed session 4.
Jan 20 18:36:56 np0005589310.novalocal sshd-session[8089]: Accepted publickey for zuul from 38.102.83.114 port 40604 ssh2: RSA SHA256:NUQhMT8WFYQNoBbXELd3vtykrkPErLT7OjFC/UP50jg
Jan 20 18:36:56 np0005589310.novalocal systemd-logind[797]: New session 5 of user zuul.
Jan 20 18:36:56 np0005589310.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 20 18:36:56 np0005589310.novalocal sshd-session[8089]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:36:56 np0005589310.novalocal sudo[8116]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqazbpejbjizvoohwzjjashvowulhhwp ; /usr/bin/python3'
Jan 20 18:36:56 np0005589310.novalocal sudo[8116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:36:56 np0005589310.novalocal python3[8118]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 18:36:58 np0005589310.novalocal irqbalance[789]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 20 18:36:58 np0005589310.novalocal irqbalance[789]: IRQ 27 affinity is now unmanaged
Jan 20 18:37:05 np0005589310.novalocal setsebool[8163]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 20 18:37:05 np0005589310.novalocal setsebool[8163]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 20 18:37:17 np0005589310.novalocal kernel: SELinux:  Converting 383 SID table entries...
Jan 20 18:37:17 np0005589310.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:37:17 np0005589310.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 20 18:37:17 np0005589310.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:37:17 np0005589310.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:37:17 np0005589310.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:37:17 np0005589310.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:37:17 np0005589310.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:37:18 np0005589310.novalocal sshd-session[8121]: Connection closed by 167.94.138.58 port 38456 [preauth]
Jan 20 18:37:26 np0005589310.novalocal kernel: SELinux:  Converting 386 SID table entries...
Jan 20 18:37:26 np0005589310.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:37:26 np0005589310.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 20 18:37:26 np0005589310.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:37:26 np0005589310.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:37:26 np0005589310.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:37:26 np0005589310.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:37:26 np0005589310.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:37:32 np0005589310.novalocal sshd-session[8893]: Invalid user sol from 45.148.10.240 port 38848
Jan 20 18:37:32 np0005589310.novalocal sshd-session[8893]: Connection closed by invalid user sol 45.148.10.240 port 38848 [preauth]
Jan 20 18:37:43 np0005589310.novalocal dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 20 18:37:43 np0005589310.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:37:43 np0005589310.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:37:43 np0005589310.novalocal systemd[1]: Reloading.
Jan 20 18:37:43 np0005589310.novalocal systemd-rc-local-generator[8935]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:37:43 np0005589310.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:37:45 np0005589310.novalocal sudo[8116]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:48 np0005589310.novalocal python3[13087]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163ec2-ffbe-386d-653e-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:37:49 np0005589310.novalocal kernel: evm: overlay not supported
Jan 20 18:37:49 np0005589310.novalocal systemd[4314]: Starting D-Bus User Message Bus...
Jan 20 18:37:49 np0005589310.novalocal dbus-broker-launch[13919]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 20 18:37:49 np0005589310.novalocal dbus-broker-launch[13919]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 20 18:37:49 np0005589310.novalocal systemd[4314]: Started D-Bus User Message Bus.
Jan 20 18:37:49 np0005589310.novalocal dbus-broker-lau[13919]: Ready
Jan 20 18:37:49 np0005589310.novalocal systemd[4314]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 20 18:37:49 np0005589310.novalocal systemd[4314]: Created slice Slice /user.
Jan 20 18:37:49 np0005589310.novalocal systemd[4314]: podman-13851.scope: unit configures an IP firewall, but not running as root.
Jan 20 18:37:49 np0005589310.novalocal systemd[4314]: (This warning is only shown for the first unit using IP firewalling.)
Jan 20 18:37:49 np0005589310.novalocal systemd[4314]: Started podman-13851.scope.
Jan 20 18:37:50 np0005589310.novalocal systemd[4314]: Started podman-pause-30ff7b45.scope.
Jan 20 18:37:50 np0005589310.novalocal sudo[14032]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piivesakmcluqxadurdthewwalqeupdj ; /usr/bin/python3'
Jan 20 18:37:50 np0005589310.novalocal sudo[14032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:37:50 np0005589310.novalocal python3[14034]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.246:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.246:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:37:50 np0005589310.novalocal python3[14034]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 20 18:37:50 np0005589310.novalocal sudo[14032]: pam_unix(sudo:session): session closed for user root
Jan 20 18:37:51 np0005589310.novalocal sshd-session[8092]: Connection closed by 38.102.83.114 port 40604
Jan 20 18:37:51 np0005589310.novalocal sshd-session[8089]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:37:51 np0005589310.novalocal systemd[1]: session-5.scope: Deactivated successfully.
Jan 20 18:37:51 np0005589310.novalocal systemd[1]: session-5.scope: Consumed 41.268s CPU time.
Jan 20 18:37:51 np0005589310.novalocal systemd-logind[797]: Session 5 logged out. Waiting for processes to exit.
Jan 20 18:37:51 np0005589310.novalocal systemd-logind[797]: Removed session 5.
Jan 20 18:38:02 np0005589310.novalocal systemd[1]: Starting Cleanup of Temporary Directories...
Jan 20 18:38:02 np0005589310.novalocal systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 20 18:38:02 np0005589310.novalocal systemd[1]: Finished Cleanup of Temporary Directories.
Jan 20 18:38:02 np0005589310.novalocal systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 20 18:38:09 np0005589310.novalocal sshd-session[22623]: Connection closed by 38.102.83.180 port 57222 [preauth]
Jan 20 18:38:09 np0005589310.novalocal sshd-session[22630]: Connection closed by 38.102.83.180 port 57226 [preauth]
Jan 20 18:38:09 np0005589310.novalocal sshd-session[22629]: Unable to negotiate with 38.102.83.180 port 57242: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 20 18:38:09 np0005589310.novalocal sshd-session[22628]: Unable to negotiate with 38.102.83.180 port 57240: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 20 18:38:09 np0005589310.novalocal sshd-session[22626]: Unable to negotiate with 38.102.83.180 port 57254: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 20 18:38:13 np0005589310.novalocal sshd-session[24710]: Accepted publickey for zuul from 38.102.83.114 port 47808 ssh2: RSA SHA256:NUQhMT8WFYQNoBbXELd3vtykrkPErLT7OjFC/UP50jg
Jan 20 18:38:13 np0005589310.novalocal systemd-logind[797]: New session 6 of user zuul.
Jan 20 18:38:13 np0005589310.novalocal systemd[1]: Started Session 6 of User zuul.
Jan 20 18:38:13 np0005589310.novalocal sshd-session[24710]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:38:14 np0005589310.novalocal python3[24826]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBpC8BQUUVe+S/xfNur/1J7ZxLnegLSyGFNXjeqwcF3o8RrsLEcuGdBmAMmxP8SjUaneFgOL7H3Pr6ghGA58O/0= zuul@np0005589309.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:38:14 np0005589310.novalocal sudo[24978]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugtscscuhdtcgsrrgnywylpcrypubbrh ; /usr/bin/python3'
Jan 20 18:38:14 np0005589310.novalocal sudo[24978]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:14 np0005589310.novalocal python3[24989]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBpC8BQUUVe+S/xfNur/1J7ZxLnegLSyGFNXjeqwcF3o8RrsLEcuGdBmAMmxP8SjUaneFgOL7H3Pr6ghGA58O/0= zuul@np0005589309.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:38:14 np0005589310.novalocal sudo[24978]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:15 np0005589310.novalocal sudo[25395]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfiaiahcfxhhhbjypsoulouagpnwxtbr ; /usr/bin/python3'
Jan 20 18:38:15 np0005589310.novalocal sudo[25395]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:15 np0005589310.novalocal python3[25403]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005589310.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 20 18:38:15 np0005589310.novalocal useradd[25485]: new group: name=cloud-admin, GID=1002
Jan 20 18:38:15 np0005589310.novalocal useradd[25485]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 20 18:38:15 np0005589310.novalocal sudo[25395]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:15 np0005589310.novalocal sudo[25632]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zdkwzriclojhpojmfacwbozmjtknkary ; /usr/bin/python3'
Jan 20 18:38:15 np0005589310.novalocal sudo[25632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:15 np0005589310.novalocal python3[25640]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBpC8BQUUVe+S/xfNur/1J7ZxLnegLSyGFNXjeqwcF3o8RrsLEcuGdBmAMmxP8SjUaneFgOL7H3Pr6ghGA58O/0= zuul@np0005589309.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 20 18:38:15 np0005589310.novalocal sudo[25632]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:16 np0005589310.novalocal sudo[25929]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxcscareqnuhwtzwrsjfrpwznopkelql ; /usr/bin/python3'
Jan 20 18:38:16 np0005589310.novalocal sudo[25929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:16 np0005589310.novalocal python3[25939]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:38:16 np0005589310.novalocal sudo[25929]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:16 np0005589310.novalocal sudo[26206]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucksjdpxmgymgqybzxlirnggrnawpqyt ; /usr/bin/python3'
Jan 20 18:38:16 np0005589310.novalocal sudo[26206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:16 np0005589310.novalocal python3[26216]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768934296.058853-135-41333304131391/source _original_basename=tmptf82l9_a follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:38:16 np0005589310.novalocal sudo[26206]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:17 np0005589310.novalocal sudo[26586]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrgoqmxcipxduvnzdnaxwxrfstmeinpd ; /usr/bin/python3'
Jan 20 18:38:17 np0005589310.novalocal sudo[26586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:38:17 np0005589310.novalocal python3[26596]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 20 18:38:17 np0005589310.novalocal systemd[1]: Starting Hostname Service...
Jan 20 18:38:17 np0005589310.novalocal systemd[1]: Started Hostname Service.
Jan 20 18:38:17 np0005589310.novalocal systemd-hostnamed[26714]: Changed pretty hostname to 'compute-0'
Jan 20 18:38:17 compute-0 systemd-hostnamed[26714]: Hostname set to <compute-0> (static)
Jan 20 18:38:17 compute-0 NetworkManager[7195]: <info>  [1768934297.7834] hostname: static hostname changed from "np0005589310.novalocal" to "compute-0"
Jan 20 18:38:17 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 18:38:17 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 18:38:17 compute-0 sudo[26586]: pam_unix(sudo:session): session closed for user root
Jan 20 18:38:18 compute-0 sshd-session[24766]: Connection closed by 38.102.83.114 port 47808
Jan 20 18:38:18 compute-0 sshd-session[24710]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:38:18 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 20 18:38:18 compute-0 systemd[1]: session-6.scope: Consumed 2.184s CPU time.
Jan 20 18:38:18 compute-0 systemd-logind[797]: Session 6 logged out. Waiting for processes to exit.
Jan 20 18:38:18 compute-0 systemd-logind[797]: Removed session 6.
Jan 20 18:38:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:38:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:38:25 compute-0 systemd[1]: man-db-cache-update.service: Consumed 51.328s CPU time.
Jan 20 18:38:25 compute-0 systemd[1]: run-r3dfe753fb93748e9b72d93297ed76bf9.service: Deactivated successfully.
Jan 20 18:38:27 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 18:38:47 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 18:39:50 compute-0 systemd[1]: Starting dnf makecache...
Jan 20 18:39:50 compute-0 dnf[29958]: Failed determining last makecache time.
Jan 20 18:39:50 compute-0 dnf[29958]: CentOS Stream 9 - BaseOS                         28 kB/s | 6.4 kB     00:00
Jan 20 18:39:51 compute-0 dnf[29958]: CentOS Stream 9 - AppStream                      29 kB/s | 6.8 kB     00:00
Jan 20 18:39:51 compute-0 sshd-session[29960]: Invalid user sol from 45.148.10.240 port 43586
Jan 20 18:39:51 compute-0 sshd-session[29960]: Connection closed by invalid user sol 45.148.10.240 port 43586 [preauth]
Jan 20 18:39:51 compute-0 dnf[29958]: CentOS Stream 9 - CRB                            61 kB/s | 6.3 kB     00:00
Jan 20 18:39:51 compute-0 dnf[29958]: CentOS Stream 9 - Extras packages                63 kB/s | 7.3 kB     00:00
Jan 20 18:39:51 compute-0 dnf[29958]: Metadata cache created.
Jan 20 18:39:51 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 20 18:39:51 compute-0 systemd[1]: Finished dnf makecache.
Jan 20 18:42:00 compute-0 sshd-session[29968]: Accepted publickey for zuul from 38.102.83.180 port 35850 ssh2: RSA SHA256:NUQhMT8WFYQNoBbXELd3vtykrkPErLT7OjFC/UP50jg
Jan 20 18:42:00 compute-0 systemd-logind[797]: New session 7 of user zuul.
Jan 20 18:42:00 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 20 18:42:00 compute-0 sshd-session[29968]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:42:01 compute-0 python3[30044]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:42:02 compute-0 sudo[30158]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlnqxycybjaphcnzbrlfjvqkxxhblepk ; /usr/bin/python3'
Jan 20 18:42:02 compute-0 sudo[30158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:02 compute-0 python3[30160]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:42:02 compute-0 sudo[30158]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:03 compute-0 sudo[30231]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjhizbklvbueoviivhhhfkygvctcklir ; /usr/bin/python3'
Jan 20 18:42:03 compute-0 sudo[30231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:03 compute-0 python3[30233]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:42:03 compute-0 sudo[30231]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:03 compute-0 sudo[30257]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxkbglthexktdujesnnptpszgepqjxkg ; /usr/bin/python3'
Jan 20 18:42:03 compute-0 sudo[30257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:03 compute-0 python3[30259]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:42:03 compute-0 sudo[30257]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:03 compute-0 sudo[30330]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcstggnbctetnnwjtwpauxemvqcigoiu ; /usr/bin/python3'
Jan 20 18:42:03 compute-0 sudo[30330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:03 compute-0 python3[30332]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:42:03 compute-0 sudo[30330]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:04 compute-0 sudo[30356]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhrdphfebrdtsalmgzovrkgsajpxpchv ; /usr/bin/python3'
Jan 20 18:42:04 compute-0 sudo[30356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:04 compute-0 python3[30358]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:42:04 compute-0 sudo[30356]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:04 compute-0 sudo[30429]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hztdtkmcgdkmkppurjiyxyouykurulba ; /usr/bin/python3'
Jan 20 18:42:04 compute-0 sudo[30429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:04 compute-0 python3[30431]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:42:04 compute-0 sudo[30429]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:04 compute-0 sudo[30455]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxaauyztfcifewqdadzniugkjykucavo ; /usr/bin/python3'
Jan 20 18:42:04 compute-0 sudo[30455]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:04 compute-0 python3[30457]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:42:04 compute-0 sudo[30455]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:05 compute-0 sudo[30528]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iundtvsqjpfvgdjepktafsshtxlzeewz ; /usr/bin/python3'
Jan 20 18:42:05 compute-0 sudo[30528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:05 compute-0 python3[30530]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:42:05 compute-0 sudo[30528]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:05 compute-0 sudo[30554]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fniyotgjfvfrrjhiwjmaicuudgahving ; /usr/bin/python3'
Jan 20 18:42:05 compute-0 sudo[30554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:05 compute-0 python3[30556]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:42:05 compute-0 sudo[30554]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:05 compute-0 sudo[30627]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eazxhrkppgblhhhoemmkcbysgscobqka ; /usr/bin/python3'
Jan 20 18:42:05 compute-0 sudo[30627]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:05 compute-0 python3[30629]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:42:05 compute-0 sudo[30627]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:05 compute-0 sudo[30653]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncosoticcydxnzemyswnnsglagbghckn ; /usr/bin/python3'
Jan 20 18:42:05 compute-0 sudo[30653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:05 compute-0 python3[30655]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:42:05 compute-0 sudo[30653]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:06 compute-0 sudo[30726]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcvkkftwanrysoanzjgtonognocbavsz ; /usr/bin/python3'
Jan 20 18:42:06 compute-0 sudo[30726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:06 compute-0 python3[30728]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:42:06 compute-0 sudo[30726]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:06 compute-0 sudo[30752]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzbezqncxjrmcoxrnjkudttrfrjfydtp ; /usr/bin/python3'
Jan 20 18:42:06 compute-0 sudo[30752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:06 compute-0 python3[30754]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 18:42:06 compute-0 sudo[30752]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:06 compute-0 sudo[30825]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujzdpbjrcmhpqnfueruunsaaslxmomvf ; /usr/bin/python3'
Jan 20 18:42:06 compute-0 sudo[30825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:42:06 compute-0 python3[30827]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1768934522.632913-33587-152330011721161/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:42:06 compute-0 sudo[30825]: pam_unix(sudo:session): session closed for user root
Jan 20 18:42:09 compute-0 sshd-session[30853]: Connection closed by 192.168.122.11 port 34266 [preauth]
Jan 20 18:42:09 compute-0 sshd-session[30852]: Connection closed by 192.168.122.11 port 34262 [preauth]
Jan 20 18:42:09 compute-0 sshd-session[30854]: Unable to negotiate with 192.168.122.11 port 34278: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 20 18:42:09 compute-0 sshd-session[30855]: Unable to negotiate with 192.168.122.11 port 34294: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 20 18:42:09 compute-0 sshd-session[30857]: Unable to negotiate with 192.168.122.11 port 34298: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 20 18:42:12 compute-0 sshd-session[30862]: Invalid user user from 45.148.10.240 port 46826
Jan 20 18:42:12 compute-0 sshd-session[30862]: Connection closed by invalid user user 45.148.10.240 port 46826 [preauth]
Jan 20 18:42:21 compute-0 python3[30887]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:44:31 compute-0 sshd-session[30889]: Invalid user solv from 45.148.10.240 port 55154
Jan 20 18:44:31 compute-0 sshd-session[30889]: Connection closed by invalid user solv 45.148.10.240 port 55154 [preauth]
Jan 20 18:46:46 compute-0 sshd-session[30892]: Invalid user solv from 45.148.10.240 port 53594
Jan 20 18:46:46 compute-0 sshd-session[30892]: Connection closed by invalid user solv 45.148.10.240 port 53594 [preauth]
Jan 20 18:47:20 compute-0 sshd-session[29971]: Received disconnect from 38.102.83.180 port 35850:11: disconnected by user
Jan 20 18:47:20 compute-0 sshd-session[29971]: Disconnected from user zuul 38.102.83.180 port 35850
Jan 20 18:47:20 compute-0 sshd-session[29968]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:47:20 compute-0 systemd-logind[797]: Session 7 logged out. Waiting for processes to exit.
Jan 20 18:47:20 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 20 18:47:20 compute-0 systemd[1]: session-7.scope: Consumed 4.558s CPU time.
Jan 20 18:47:20 compute-0 systemd-logind[797]: Removed session 7.
Jan 20 18:49:00 compute-0 sshd-session[30895]: Invalid user solv from 45.148.10.240 port 38356
Jan 20 18:49:00 compute-0 sshd-session[30895]: Connection closed by invalid user solv 45.148.10.240 port 38356 [preauth]
Jan 20 18:49:45 compute-0 sshd-session[30897]: Invalid user nginx from 58.82.169.249 port 52254
Jan 20 18:49:46 compute-0 sshd-session[30897]: Received disconnect from 58.82.169.249 port 52254:11:  [preauth]
Jan 20 18:49:46 compute-0 sshd-session[30897]: Disconnected from invalid user nginx 58.82.169.249 port 52254 [preauth]
Jan 20 18:51:15 compute-0 sshd-session[30899]: Invalid user solv from 45.148.10.240 port 52202
Jan 20 18:51:16 compute-0 sshd-session[30899]: Connection closed by invalid user solv 45.148.10.240 port 52202 [preauth]
Jan 20 18:52:24 compute-0 sshd-session[30902]: Connection closed by 14.63.166.251 port 37579 [preauth]
Jan 20 18:53:10 compute-0 sshd-session[30904]: Accepted publickey for zuul from 192.168.122.30 port 52344 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 18:53:10 compute-0 systemd-logind[797]: New session 8 of user zuul.
Jan 20 18:53:10 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 20 18:53:10 compute-0 sshd-session[30904]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:53:11 compute-0 python3.9[31057]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:53:12 compute-0 sudo[31236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwnspqwexhuaosahxrvpjfbyavoukbnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935192.2538834-27-198751319867197/AnsiballZ_command.py'
Jan 20 18:53:12 compute-0 sudo[31236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:12 compute-0 python3.9[31238]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:53:19 compute-0 sudo[31236]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:20 compute-0 sshd-session[30907]: Connection closed by 192.168.122.30 port 52344
Jan 20 18:53:20 compute-0 sshd-session[30904]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:53:20 compute-0 systemd-logind[797]: Session 8 logged out. Waiting for processes to exit.
Jan 20 18:53:20 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 20 18:53:20 compute-0 systemd[1]: session-8.scope: Consumed 7.385s CPU time.
Jan 20 18:53:20 compute-0 systemd-logind[797]: Removed session 8.
Jan 20 18:53:32 compute-0 sshd-session[31298]: Invalid user solv from 45.148.10.240 port 57904
Jan 20 18:53:32 compute-0 sshd-session[31298]: Connection closed by invalid user solv 45.148.10.240 port 57904 [preauth]
Jan 20 18:53:35 compute-0 sshd-session[31300]: Accepted publickey for zuul from 192.168.122.30 port 58550 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 18:53:35 compute-0 systemd-logind[797]: New session 9 of user zuul.
Jan 20 18:53:35 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 20 18:53:35 compute-0 sshd-session[31300]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:53:36 compute-0 python3.9[31453]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 20 18:53:37 compute-0 python3.9[31627]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:53:38 compute-0 sudo[31777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otgnuztcocfgspfshrdxhaiwngunttia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935217.6047626-40-123648215430740/AnsiballZ_command.py'
Jan 20 18:53:38 compute-0 sudo[31777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:38 compute-0 python3.9[31779]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:53:38 compute-0 sudo[31777]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:38 compute-0 sudo[31930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iayznjalerubfpzrlqziyjnnaqmnauzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935218.445758-52-140755371098625/AnsiballZ_stat.py'
Jan 20 18:53:38 compute-0 sudo[31930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:38 compute-0 python3.9[31932]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:53:38 compute-0 sudo[31930]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:39 compute-0 sudo[32082]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pruironcqhzfublarqcmjosabmokphfo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935219.1283777-60-82497220703024/AnsiballZ_file.py'
Jan 20 18:53:39 compute-0 sudo[32082]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:39 compute-0 python3.9[32084]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:39 compute-0 sudo[32082]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:40 compute-0 sudo[32234]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csxodzcjkpqrzfddhhlilqpdnszrhhyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935219.8563318-68-245528657497906/AnsiballZ_stat.py'
Jan 20 18:53:40 compute-0 sudo[32234]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:40 compute-0 python3.9[32236]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:53:40 compute-0 sudo[32234]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:41 compute-0 sudo[32357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utdwlzgzaoudhbdhqrpxpvllbugbcmxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935219.8563318-68-245528657497906/AnsiballZ_copy.py'
Jan 20 18:53:41 compute-0 sudo[32357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:41 compute-0 python3.9[32359]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935219.8563318-68-245528657497906/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:41 compute-0 sudo[32357]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:41 compute-0 sudo[32509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnfxjujnlccnwmiyfwtqjfkhfbarmryw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935221.6605444-83-170599736704643/AnsiballZ_setup.py'
Jan 20 18:53:41 compute-0 sudo[32509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:42 compute-0 python3.9[32511]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:53:42 compute-0 sudo[32509]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:42 compute-0 sudo[32665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehfatgrhxcmkxwyjxftmchpsskblbaiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935222.521833-91-171571669275460/AnsiballZ_file.py'
Jan 20 18:53:42 compute-0 sudo[32665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:42 compute-0 python3.9[32667]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:53:42 compute-0 sudo[32665]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:43 compute-0 sudo[32817]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phieudvxamdctycbxwufpklnejtvwmiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935223.1317298-100-276005490330974/AnsiballZ_file.py'
Jan 20 18:53:43 compute-0 sudo[32817]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:43 compute-0 python3.9[32819]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:53:43 compute-0 sudo[32817]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:44 compute-0 python3.9[32969]: ansible-ansible.builtin.service_facts Invoked
Jan 20 18:53:49 compute-0 python3.9[33222]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:53:50 compute-0 python3.9[33372]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:53:51 compute-0 python3.9[33526]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:53:51 compute-0 sudo[33682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dukaygiocwdbqcizinklovqlxpxcwzxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935231.5683892-148-144663877208494/AnsiballZ_setup.py'
Jan 20 18:53:51 compute-0 sudo[33682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:52 compute-0 python3.9[33684]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:53:52 compute-0 sudo[33682]: pam_unix(sudo:session): session closed for user root
Jan 20 18:53:52 compute-0 sudo[33766]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwwjknzhxkfxygocazacjoedwwpktmta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935231.5683892-148-144663877208494/AnsiballZ_dnf.py'
Jan 20 18:53:52 compute-0 sudo[33766]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:53:53 compute-0 python3.9[33768]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:54:43 compute-0 systemd[1]: Reloading.
Jan 20 18:54:43 compute-0 systemd-rc-local-generator[33967]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:54:43 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 20 18:54:44 compute-0 systemd[1]: Reloading.
Jan 20 18:54:44 compute-0 systemd-rc-local-generator[34009]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:54:44 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 20 18:54:44 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 20 18:54:44 compute-0 systemd[1]: Reloading.
Jan 20 18:54:44 compute-0 systemd-rc-local-generator[34047]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:54:44 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 20 18:54:44 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 18:54:44 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 18:54:44 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 18:55:53 compute-0 sshd-session[34251]: Invalid user validator from 45.148.10.240 port 59264
Jan 20 18:55:53 compute-0 sshd-session[34251]: Connection closed by invalid user validator 45.148.10.240 port 59264 [preauth]
Jan 20 18:56:01 compute-0 sshd-session[34258]: Received disconnect from 36.137.141.10 port 43300:11:  [preauth]
Jan 20 18:56:01 compute-0 sshd-session[34258]: Disconnected from authenticating user root 36.137.141.10 port 43300 [preauth]
Jan 20 18:56:01 compute-0 kernel: SELinux:  Converting 2722 SID table entries...
Jan 20 18:56:01 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:56:01 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 18:56:01 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:56:01 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:56:01 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:56:01 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:56:01 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:56:01 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 20 18:56:01 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:56:01 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:56:01 compute-0 systemd[1]: Reloading.
Jan 20 18:56:01 compute-0 systemd-rc-local-generator[34374]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:01 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:56:02 compute-0 sudo[33766]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:02 compute-0 sudo[35282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tfrajnbrpifnrzugpubxevdcqkybwmuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935362.34699-160-164569294473227/AnsiballZ_command.py'
Jan 20 18:56:02 compute-0 sudo[35282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:02 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:56:02 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:56:02 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.121s CPU time.
Jan 20 18:56:02 compute-0 systemd[1]: run-rb123df4f28dc49dfaa370554c2e9c029.service: Deactivated successfully.
Jan 20 18:56:02 compute-0 python3.9[35284]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:56:03 compute-0 sudo[35282]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:04 compute-0 sudo[35564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-moltvphplsbpvzczthrnkiyjatagkcpf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935363.6582885-168-221714304330820/AnsiballZ_selinux.py'
Jan 20 18:56:04 compute-0 sudo[35564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:04 compute-0 python3.9[35566]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 20 18:56:04 compute-0 sudo[35564]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:05 compute-0 sudo[35716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeaokqrpvinfqgzyzxvgetoeskkalzyx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935364.8792372-179-252592410373673/AnsiballZ_command.py'
Jan 20 18:56:05 compute-0 sudo[35716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:05 compute-0 python3.9[35718]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 20 18:56:06 compute-0 sudo[35716]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:07 compute-0 sudo[35869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjwapjtjwzadofdhlmchgacluceegvja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935366.8206596-187-57366978259736/AnsiballZ_file.py'
Jan 20 18:56:07 compute-0 sudo[35869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:08 compute-0 python3.9[35871]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:56:08 compute-0 sudo[35869]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:09 compute-0 sudo[36021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtzqgorctrvqyvyrorebzmhoomyoujup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935368.4115064-195-242842062633614/AnsiballZ_mount.py'
Jan 20 18:56:09 compute-0 sudo[36021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:09 compute-0 python3.9[36023]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 20 18:56:09 compute-0 sudo[36021]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:10 compute-0 sudo[36173]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jhnwvotvzvmecroesnpbixhhmdutdivl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935370.0786824-223-41107844200967/AnsiballZ_file.py'
Jan 20 18:56:10 compute-0 sudo[36173]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:10 compute-0 python3.9[36175]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:56:10 compute-0 sudo[36173]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:11 compute-0 sudo[36325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjolyzcfdilwkgcydydzivosmjberpzx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935370.7688947-231-39266725487616/AnsiballZ_stat.py'
Jan 20 18:56:11 compute-0 sudo[36325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:11 compute-0 python3.9[36327]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:56:11 compute-0 sudo[36325]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:11 compute-0 sudo[36448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvdyewpdsubxiyhhvukrecukjvimntbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935370.7688947-231-39266725487616/AnsiballZ_copy.py'
Jan 20 18:56:11 compute-0 sudo[36448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:11 compute-0 python3.9[36450]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935370.7688947-231-39266725487616/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:56:11 compute-0 sudo[36448]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:12 compute-0 sudo[36600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iogymravrspbhdykikpxyyaxpzztdbdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935372.2861743-255-196028004526184/AnsiballZ_stat.py'
Jan 20 18:56:12 compute-0 sudo[36600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:13 compute-0 python3.9[36602]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:56:13 compute-0 sudo[36600]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:13 compute-0 sudo[36752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmdzxbmhkhugzckzsnmtzhytrwbqtfkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935373.6982348-263-9337404801040/AnsiballZ_command.py'
Jan 20 18:56:13 compute-0 sudo[36752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:15 compute-0 python3.9[36754]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:56:15 compute-0 sudo[36752]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:16 compute-0 sudo[36906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoupffgnvyyepfsgmylqxmsnrrvnmsjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935376.0000722-271-164303696788480/AnsiballZ_file.py'
Jan 20 18:56:16 compute-0 sudo[36906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:16 compute-0 python3.9[36908]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:56:16 compute-0 sudo[36906]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:17 compute-0 sudo[37058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgskwwzfyzgjmppvcfurophgljrfpjci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935376.765985-282-195435175428654/AnsiballZ_getent.py'
Jan 20 18:56:17 compute-0 sudo[37058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:18 compute-0 python3.9[37060]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 20 18:56:18 compute-0 sudo[37058]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:18 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 18:56:18 compute-0 sudo[37212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hewqzvyxzaukyprnshrynlisxsducrod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935378.3997176-290-105770055737964/AnsiballZ_group.py'
Jan 20 18:56:18 compute-0 sudo[37212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:20 compute-0 python3.9[37214]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 18:56:20 compute-0 groupadd[37215]: group added to /etc/group: name=qemu, GID=107
Jan 20 18:56:20 compute-0 groupadd[37215]: group added to /etc/gshadow: name=qemu
Jan 20 18:56:20 compute-0 groupadd[37215]: new group: name=qemu, GID=107
Jan 20 18:56:20 compute-0 sudo[37212]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:20 compute-0 sudo[37370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egfyrxcrxhgvfxokomxaxjjehizutozw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935380.4293406-298-238007148380773/AnsiballZ_user.py'
Jan 20 18:56:20 compute-0 sudo[37370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:21 compute-0 python3.9[37372]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 18:56:21 compute-0 useradd[37374]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 18:56:21 compute-0 sudo[37370]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:21 compute-0 sudo[37530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjngkiklzczbvmelgywdhbxdqoqvscss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935381.5768898-306-278749058800976/AnsiballZ_getent.py'
Jan 20 18:56:21 compute-0 sudo[37530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:22 compute-0 python3.9[37532]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 20 18:56:22 compute-0 sudo[37530]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:22 compute-0 sudo[37683]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acjbimzgtuzgxijdylctqtxxadplwkpd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935382.223562-314-154991720216771/AnsiballZ_group.py'
Jan 20 18:56:22 compute-0 sudo[37683]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:22 compute-0 python3.9[37685]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 18:56:23 compute-0 groupadd[37686]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 20 18:56:23 compute-0 groupadd[37686]: group added to /etc/gshadow: name=hugetlbfs
Jan 20 18:56:23 compute-0 groupadd[37686]: new group: name=hugetlbfs, GID=42477
Jan 20 18:56:23 compute-0 sudo[37683]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:23 compute-0 sudo[37841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mwlqdutwvfqfmgboeopbcreyxsbplrav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935383.3182368-323-90267562711382/AnsiballZ_file.py'
Jan 20 18:56:23 compute-0 sudo[37841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:23 compute-0 python3.9[37843]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 20 18:56:23 compute-0 sudo[37841]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:24 compute-0 sudo[37993]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yidhwgfxqiqdnncuzptysidailjbypha ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935384.0722027-334-262560433632527/AnsiballZ_dnf.py'
Jan 20 18:56:24 compute-0 sudo[37993]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:24 compute-0 python3.9[37995]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:56:26 compute-0 sudo[37993]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:27 compute-0 sudo[38146]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sugpgfxrvxerhtmvusxxyitqsxhmdksx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935386.8591087-342-126778315104695/AnsiballZ_file.py'
Jan 20 18:56:27 compute-0 sudo[38146]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:27 compute-0 python3.9[38148]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:56:27 compute-0 sudo[38146]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:27 compute-0 sudo[38298]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzgbxaybfchaxduuvxduzakbfvvmuwjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935387.4535568-350-72672085320745/AnsiballZ_stat.py'
Jan 20 18:56:27 compute-0 sudo[38298]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:27 compute-0 python3.9[38300]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:56:27 compute-0 sudo[38298]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:28 compute-0 sudo[38421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwltypqbfwhzyeiutttwnqitkhawrvba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935387.4535568-350-72672085320745/AnsiballZ_copy.py'
Jan 20 18:56:28 compute-0 sudo[38421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:28 compute-0 python3.9[38423]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935387.4535568-350-72672085320745/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:56:28 compute-0 sudo[38421]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:29 compute-0 sudo[38573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alblkgrcirgcatlzqtjfmvruvslfdupk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935388.6877453-365-56398726217113/AnsiballZ_systemd.py'
Jan 20 18:56:29 compute-0 sudo[38573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:29 compute-0 python3.9[38575]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:56:30 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 20 18:56:30 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 20 18:56:30 compute-0 kernel: Bridge firewalling registered
Jan 20 18:56:30 compute-0 systemd-modules-load[38579]: Inserted module 'br_netfilter'
Jan 20 18:56:30 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 20 18:56:30 compute-0 sudo[38573]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:31 compute-0 sudo[38733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kozobhdlnazliyuzcnyusamkbwropqrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935390.8282022-373-199954707583882/AnsiballZ_stat.py'
Jan 20 18:56:31 compute-0 sudo[38733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:31 compute-0 python3.9[38735]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:56:31 compute-0 sudo[38733]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:31 compute-0 sudo[38856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teqtqfmmsozzrazmchfbxlibyxfdvcgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935390.8282022-373-199954707583882/AnsiballZ_copy.py'
Jan 20 18:56:31 compute-0 sudo[38856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:31 compute-0 python3.9[38858]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935390.8282022-373-199954707583882/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:56:31 compute-0 sudo[38856]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:32 compute-0 sudo[39008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmyhfkrtjeubvonglifdjuarhdjtanqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935392.059238-391-176923372892620/AnsiballZ_dnf.py'
Jan 20 18:56:32 compute-0 sudo[39008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:32 compute-0 python3.9[39010]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:56:35 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 18:56:35 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 18:56:36 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:56:36 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:56:36 compute-0 systemd[1]: Reloading.
Jan 20 18:56:36 compute-0 systemd-rc-local-generator[39072]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:36 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:56:38 compute-0 sudo[39008]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:38 compute-0 python3.9[40202]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:56:39 compute-0 python3.9[41126]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 20 18:56:40 compute-0 python3.9[42108]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:56:41 compute-0 sudo[43020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyjdlpogicyakaacawoonkoymeopcmeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935400.7500384-430-270768978054135/AnsiballZ_command.py'
Jan 20 18:56:41 compute-0 sudo[43020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:41 compute-0 python3.9[43042]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:56:41 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 18:56:41 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:56:41 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:56:41 compute-0 systemd[1]: man-db-cache-update.service: Consumed 4.593s CPU time.
Jan 20 18:56:41 compute-0 systemd[1]: run-r0a68c49366404f70ac4684f2acfd1cf8.service: Deactivated successfully.
Jan 20 18:56:41 compute-0 systemd[1]: Starting Authorization Manager...
Jan 20 18:56:41 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 18:56:41 compute-0 polkitd[43397]: Started polkitd version 0.117
Jan 20 18:56:41 compute-0 polkitd[43397]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 18:56:41 compute-0 polkitd[43397]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 18:56:41 compute-0 polkitd[43397]: Finished loading, compiling and executing 2 rules
Jan 20 18:56:41 compute-0 systemd[1]: Started Authorization Manager.
Jan 20 18:56:41 compute-0 polkitd[43397]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 20 18:56:41 compute-0 sudo[43020]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:42 compute-0 sudo[43565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzwywmrvvyxjsrzlmoljjukusacyjaei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935402.07208-439-8111682573270/AnsiballZ_systemd.py'
Jan 20 18:56:42 compute-0 sudo[43565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:42 compute-0 python3.9[43567]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:56:42 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 20 18:56:42 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 20 18:56:42 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 20 18:56:42 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 18:56:42 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 18:56:42 compute-0 sudo[43565]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:43 compute-0 python3.9[43728]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 20 18:56:45 compute-0 sudo[43878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sdkfvgncmflmxiqpimfltgipkkakiaar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935405.145218-496-243392591841163/AnsiballZ_systemd.py'
Jan 20 18:56:45 compute-0 sudo[43878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:45 compute-0 python3.9[43880]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:56:45 compute-0 systemd[1]: Reloading.
Jan 20 18:56:45 compute-0 systemd-rc-local-generator[43909]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:45 compute-0 sudo[43878]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:46 compute-0 sudo[44067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxujbwrkqryedcnrkjdtbnkwgffnixfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935406.0423386-496-185619515443745/AnsiballZ_systemd.py'
Jan 20 18:56:46 compute-0 sudo[44067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:46 compute-0 python3.9[44069]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:56:46 compute-0 systemd[1]: Reloading.
Jan 20 18:56:46 compute-0 systemd-rc-local-generator[44099]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:56:46 compute-0 sudo[44067]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:47 compute-0 sudo[44256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvpmnjnrmgxlqtuauvjuwpkfsumrfrsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935407.13726-512-177233882557894/AnsiballZ_command.py'
Jan 20 18:56:47 compute-0 sudo[44256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:47 compute-0 python3.9[44258]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:56:47 compute-0 sudo[44256]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:48 compute-0 sudo[44409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnsaprihdkfpvvhcxflckogesozienld ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935407.821508-520-36490898994906/AnsiballZ_command.py'
Jan 20 18:56:48 compute-0 sudo[44409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:48 compute-0 python3.9[44411]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:56:48 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 20 18:56:48 compute-0 sudo[44409]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:48 compute-0 sudo[44562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krixzqfzdchtxcdxlcofbckcnojgmwyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935408.4898412-528-75627134285668/AnsiballZ_command.py'
Jan 20 18:56:48 compute-0 sudo[44562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:48 compute-0 python3.9[44564]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:56:50 compute-0 sudo[44562]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:50 compute-0 sudo[44724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdxlfpxrgcvcjpvyuqnenubmqyklfojz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935410.516637-536-143533926453241/AnsiballZ_command.py'
Jan 20 18:56:50 compute-0 sudo[44724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:51 compute-0 python3.9[44726]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:56:51 compute-0 sudo[44724]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:51 compute-0 sudo[44877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgjswyvqfrwxlyqeomqxmfkffkgjbhfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935411.1748538-544-20582153818195/AnsiballZ_systemd.py'
Jan 20 18:56:51 compute-0 sudo[44877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:56:51 compute-0 python3.9[44879]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:56:51 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 20 18:56:51 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 20 18:56:51 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 20 18:56:51 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 20 18:56:51 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 20 18:56:51 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 20 18:56:51 compute-0 sudo[44877]: pam_unix(sudo:session): session closed for user root
Jan 20 18:56:52 compute-0 sshd-session[31303]: Connection closed by 192.168.122.30 port 58550
Jan 20 18:56:52 compute-0 sshd-session[31300]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:56:52 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 20 18:56:52 compute-0 systemd[1]: session-9.scope: Consumed 2min 12.849s CPU time.
Jan 20 18:56:52 compute-0 systemd-logind[797]: Session 9 logged out. Waiting for processes to exit.
Jan 20 18:56:52 compute-0 systemd-logind[797]: Removed session 9.
Jan 20 18:56:58 compute-0 sshd-session[44909]: Accepted publickey for zuul from 192.168.122.30 port 51954 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 18:56:58 compute-0 systemd-logind[797]: New session 10 of user zuul.
Jan 20 18:56:58 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 20 18:56:58 compute-0 sshd-session[44909]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:56:59 compute-0 python3.9[45062]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:57:00 compute-0 sudo[45216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjlxzwhazlhokqhyicndgztrmlfbpjef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935420.2815037-31-264825957117059/AnsiballZ_getent.py'
Jan 20 18:57:00 compute-0 sudo[45216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:00 compute-0 python3.9[45218]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 20 18:57:00 compute-0 sudo[45216]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:01 compute-0 sudo[45369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odokrmzlccadmvnodikhsiuvukhpnmpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935421.0234144-39-5020425881915/AnsiballZ_group.py'
Jan 20 18:57:01 compute-0 sudo[45369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:01 compute-0 python3.9[45371]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 18:57:01 compute-0 groupadd[45372]: group added to /etc/group: name=openvswitch, GID=42476
Jan 20 18:57:01 compute-0 groupadd[45372]: group added to /etc/gshadow: name=openvswitch
Jan 20 18:57:01 compute-0 groupadd[45372]: new group: name=openvswitch, GID=42476
Jan 20 18:57:01 compute-0 sudo[45369]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:02 compute-0 sudo[45527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgokihglyovymmkweclfxytjxwqtbxyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935421.8844056-47-108041679948185/AnsiballZ_user.py'
Jan 20 18:57:02 compute-0 sudo[45527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:02 compute-0 python3.9[45529]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 18:57:02 compute-0 useradd[45531]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 18:57:02 compute-0 useradd[45531]: add 'openvswitch' to group 'hugetlbfs'
Jan 20 18:57:02 compute-0 useradd[45531]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 20 18:57:02 compute-0 sudo[45527]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:03 compute-0 sudo[45687]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvvqndynzodcvyvaaydtqtpbnrywnnxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935422.8962328-57-85722860912112/AnsiballZ_setup.py'
Jan 20 18:57:03 compute-0 sudo[45687]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:03 compute-0 python3.9[45689]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:57:03 compute-0 sudo[45687]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:04 compute-0 sudo[45771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbhhytohjimfnmeklnvbclgywiemouea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935422.8962328-57-85722860912112/AnsiballZ_dnf.py'
Jan 20 18:57:04 compute-0 sudo[45771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:04 compute-0 python3.9[45773]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 18:57:07 compute-0 sudo[45771]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:07 compute-0 sudo[45934]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qndbtaryoyofwiueyqlnzvimurkfzdgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935427.5939715-71-219139155854209/AnsiballZ_dnf.py'
Jan 20 18:57:07 compute-0 sudo[45934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:08 compute-0 python3.9[45936]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:57:22 compute-0 kernel: SELinux:  Converting 2735 SID table entries...
Jan 20 18:57:22 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:57:22 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 18:57:22 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:57:22 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:57:22 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:57:22 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:57:22 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:57:22 compute-0 groupadd[45959]: group added to /etc/group: name=unbound, GID=994
Jan 20 18:57:22 compute-0 groupadd[45959]: group added to /etc/gshadow: name=unbound
Jan 20 18:57:22 compute-0 groupadd[45959]: new group: name=unbound, GID=994
Jan 20 18:57:22 compute-0 useradd[45966]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 20 18:57:22 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 20 18:57:22 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 20 18:57:25 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:57:25 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:57:25 compute-0 systemd[1]: Reloading.
Jan 20 18:57:25 compute-0 systemd-rc-local-generator[46463]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:57:25 compute-0 systemd-sysv-generator[46466]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:57:25 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:57:25 compute-0 sudo[45934]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:57:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:57:25 compute-0 systemd[1]: run-ra849372f00b04c8bbe297f2d8b287318.service: Deactivated successfully.
Jan 20 18:57:26 compute-0 sudo[47032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiwxkewpxrkthiovpynxbreocxfuqnqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935445.9912932-79-204705979386141/AnsiballZ_systemd.py'
Jan 20 18:57:26 compute-0 sudo[47032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:26 compute-0 python3.9[47034]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 18:57:26 compute-0 systemd[1]: Reloading.
Jan 20 18:57:26 compute-0 systemd-rc-local-generator[47062]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:57:27 compute-0 systemd-sysv-generator[47066]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:57:27 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 20 18:57:27 compute-0 chown[47076]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 20 18:57:27 compute-0 ovs-ctl[47081]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 20 18:57:27 compute-0 ovs-ctl[47081]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 20 18:57:27 compute-0 ovs-ctl[47081]: Starting ovsdb-server [  OK  ]
Jan 20 18:57:27 compute-0 ovs-vsctl[47130]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 20 18:57:27 compute-0 ovs-vsctl[47150]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"15f2b046-37e6-488b-9e52-3d187c798598\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 20 18:57:27 compute-0 ovs-ctl[47081]: Configuring Open vSwitch system IDs [  OK  ]
Jan 20 18:57:27 compute-0 ovs-ctl[47081]: Enabling remote OVSDB managers [  OK  ]
Jan 20 18:57:27 compute-0 ovs-vsctl[47156]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 20 18:57:27 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 20 18:57:27 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 20 18:57:27 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 20 18:57:27 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 20 18:57:27 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 20 18:57:27 compute-0 ovs-ctl[47200]: Inserting openvswitch module [  OK  ]
Jan 20 18:57:27 compute-0 ovs-ctl[47169]: Starting ovs-vswitchd [  OK  ]
Jan 20 18:57:27 compute-0 ovs-ctl[47169]: Enabling remote OVSDB managers [  OK  ]
Jan 20 18:57:27 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 20 18:57:27 compute-0 ovs-vsctl[47218]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 20 18:57:27 compute-0 systemd[1]: Starting Open vSwitch...
Jan 20 18:57:27 compute-0 systemd[1]: Finished Open vSwitch.
Jan 20 18:57:27 compute-0 sudo[47032]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:28 compute-0 python3.9[47369]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:57:29 compute-0 sudo[47519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrxtxgbmlmgpxduwertnktfbpgkqyyta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935448.9362805-97-120290840556059/AnsiballZ_sefcontext.py'
Jan 20 18:57:29 compute-0 sudo[47519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:29 compute-0 python3.9[47521]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 20 18:57:30 compute-0 kernel: SELinux:  Converting 2749 SID table entries...
Jan 20 18:57:30 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 18:57:30 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 18:57:30 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 18:57:30 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 18:57:30 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 18:57:30 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 18:57:30 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 18:57:31 compute-0 sudo[47519]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:31 compute-0 python3.9[47676]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:57:32 compute-0 sudo[47832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdlrymipsgahoveuyfcoiqftnhwnfeql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935452.0863547-115-191164180950391/AnsiballZ_dnf.py'
Jan 20 18:57:32 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 20 18:57:32 compute-0 sudo[47832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:32 compute-0 python3.9[47834]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:57:33 compute-0 sudo[47832]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:34 compute-0 sudo[47985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdsledgldwhmwskpxbmzneafvcmxtftk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935453.9926255-123-196206179861117/AnsiballZ_command.py'
Jan 20 18:57:34 compute-0 sudo[47985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:34 compute-0 python3.9[47987]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:57:35 compute-0 sudo[47985]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:35 compute-0 sudo[48272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bndqazjljjqbgoajrblqmkxvyhykshgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935455.3816934-131-179330554516507/AnsiballZ_file.py'
Jan 20 18:57:35 compute-0 sudo[48272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:35 compute-0 python3.9[48274]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 20 18:57:35 compute-0 sudo[48272]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:36 compute-0 python3.9[48425]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:57:37 compute-0 sudo[48577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psceqvpkjonrjbezslibrrchxygwffhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935456.878801-147-53029427892137/AnsiballZ_dnf.py'
Jan 20 18:57:37 compute-0 sudo[48577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:37 compute-0 python3.9[48579]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:57:40 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:57:40 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:57:40 compute-0 systemd[1]: Reloading.
Jan 20 18:57:40 compute-0 systemd-rc-local-generator[48620]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:57:40 compute-0 systemd-sysv-generator[48625]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:57:40 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:57:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:57:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:57:40 compute-0 systemd[1]: run-rbea9b80d3d0943aa97d2cf9a3f3c8ead.service: Deactivated successfully.
Jan 20 18:57:40 compute-0 sudo[48577]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:41 compute-0 sudo[48896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-azopapnuhgjraeizmihlplmyuuyrapnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935460.913582-155-69223598620021/AnsiballZ_systemd.py'
Jan 20 18:57:41 compute-0 sudo[48896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:41 compute-0 python3.9[48898]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:57:41 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 20 18:57:41 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 20 18:57:41 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 20 18:57:41 compute-0 systemd[1]: Stopping Network Manager...
Jan 20 18:57:41 compute-0 NetworkManager[7195]: <info>  [1768935461.5699] caught SIGTERM, shutting down normally.
Jan 20 18:57:41 compute-0 NetworkManager[7195]: <info>  [1768935461.5712] dhcp4 (eth0): canceled DHCP transaction
Jan 20 18:57:41 compute-0 NetworkManager[7195]: <info>  [1768935461.5712] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:57:41 compute-0 NetworkManager[7195]: <info>  [1768935461.5712] dhcp4 (eth0): state changed no lease
Jan 20 18:57:41 compute-0 NetworkManager[7195]: <info>  [1768935461.5714] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 18:57:41 compute-0 NetworkManager[7195]: <info>  [1768935461.5780] exiting (success)
Jan 20 18:57:41 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 18:57:41 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 18:57:41 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 20 18:57:41 compute-0 systemd[1]: Stopped Network Manager.
Jan 20 18:57:41 compute-0 systemd[1]: NetworkManager.service: Consumed 10.944s CPU time, 4.4M memory peak, read 0B from disk, written 21.5K to disk.
Jan 20 18:57:41 compute-0 systemd[1]: Starting Network Manager...
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.6489] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:67fc3c9d-8ab5-4c8d-ad06-0b5b4ad77266)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.6492] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.6543] manager[0x55d9d0041000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 20 18:57:41 compute-0 systemd[1]: Starting Hostname Service...
Jan 20 18:57:41 compute-0 systemd[1]: Started Hostname Service.
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7346] hostname: hostname: using hostnamed
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7346] hostname: static hostname changed from (none) to "compute-0"
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7351] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7356] manager[0x55d9d0041000]: rfkill: Wi-Fi hardware radio set enabled
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7356] manager[0x55d9d0041000]: rfkill: WWAN hardware radio set enabled
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7375] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7383] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7383] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7384] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7384] manager: Networking is enabled by state file
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7387] settings: Loaded settings plugin: keyfile (internal)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7390] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7415] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7424] dhcp: init: Using DHCP client 'internal'
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7426] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7430] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7435] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7441] device (lo): Activation: starting connection 'lo' (9dbcb845-48af-44e7-aac2-9b1c27d04ec3)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7446] device (eth0): carrier: link connected
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7450] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7454] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7454] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7459] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7464] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7469] device (eth1): carrier: link connected
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7472] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7476] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (3f70ede9-7960-5c64-9771-a2eedfd4d85a) (indicated)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7477] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7481] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7486] device (eth1): Activation: starting connection 'ci-private-network' (3f70ede9-7960-5c64-9771-a2eedfd4d85a)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7491] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 20 18:57:41 compute-0 systemd[1]: Started Network Manager.
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7497] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7499] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7501] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7503] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7505] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7507] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7509] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7512] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7518] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7520] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7547] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7561] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7576] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7579] dhcp4 (eth0): state changed new lease, address=38.102.83.210
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7582] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7589] device (lo): Activation: successful, device activated.
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7602] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 20 18:57:41 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7673] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7681] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7684] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7689] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7693] device (eth1): Activation: successful, device activated.
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7762] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7765] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7770] manager: NetworkManager state is now CONNECTED_SITE
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7774] device (eth0): Activation: successful, device activated.
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7781] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 20 18:57:41 compute-0 NetworkManager[48913]: <info>  [1768935461.7784] manager: startup complete
Jan 20 18:57:41 compute-0 sudo[48896]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:41 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 20 18:57:42 compute-0 sudo[49122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgltsgbgkklspfficsshcgsggotxpuqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935461.959424-163-225332498366366/AnsiballZ_dnf.py'
Jan 20 18:57:42 compute-0 sudo[49122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:42 compute-0 python3.9[49124]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:57:51 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 18:57:52 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 18:57:52 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 18:57:52 compute-0 systemd[1]: Reloading.
Jan 20 18:57:52 compute-0 systemd-rc-local-generator[49178]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:57:52 compute-0 systemd-sysv-generator[49183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:57:52 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 18:57:53 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 18:57:53 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 18:57:53 compute-0 systemd[1]: run-rd12956085b6e45e4a80dd672b24344c0.service: Deactivated successfully.
Jan 20 18:57:53 compute-0 sudo[49122]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:53 compute-0 sudo[49582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahiwcoowrrmwkpgwecpkquhaggeluuio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935473.5450795-175-32416781317797/AnsiballZ_stat.py'
Jan 20 18:57:53 compute-0 sudo[49582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:53 compute-0 python3.9[49584]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:57:53 compute-0 sudo[49582]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:54 compute-0 sudo[49734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zazjyqffifduhxxllncsunhnvfygrrtx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935474.1450455-184-246370237170219/AnsiballZ_ini_file.py'
Jan 20 18:57:54 compute-0 sudo[49734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:54 compute-0 python3.9[49736]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:54 compute-0 sudo[49734]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:55 compute-0 sudo[49888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uywtutucqffpkjxqhyqvflrnaysflyet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935474.955407-194-164520506062397/AnsiballZ_ini_file.py'
Jan 20 18:57:55 compute-0 sudo[49888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:55 compute-0 python3.9[49890]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:55 compute-0 sudo[49888]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:55 compute-0 sudo[50040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnedpltxbbxjfmusnxvpctpueucjzszt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935475.521721-194-10274801944378/AnsiballZ_ini_file.py'
Jan 20 18:57:55 compute-0 sudo[50040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:55 compute-0 python3.9[50042]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:55 compute-0 sudo[50040]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:56 compute-0 sudo[50192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuxihririvtfyvixpytcjbgxoywaddgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935476.0940762-209-215425590135288/AnsiballZ_ini_file.py'
Jan 20 18:57:56 compute-0 sudo[50192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:56 compute-0 python3.9[50194]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:56 compute-0 sudo[50192]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:57 compute-0 sudo[50344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbknsbixxoiklhnxdclxydjsplhaenaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935476.6439583-209-74711713980184/AnsiballZ_ini_file.py'
Jan 20 18:57:57 compute-0 sudo[50344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:57 compute-0 python3.9[50346]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:57 compute-0 sudo[50344]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:57 compute-0 sudo[50496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tapezvtyzxwwauhkocsdrxkqrfjbpbne ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935477.4060614-224-79759771986923/AnsiballZ_stat.py'
Jan 20 18:57:57 compute-0 sudo[50496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:57 compute-0 python3.9[50498]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:57:57 compute-0 sudo[50496]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:58 compute-0 sudo[50620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyvrjcbukgsywizfszvejalepyqouklu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935477.4060614-224-79759771986923/AnsiballZ_copy.py'
Jan 20 18:57:58 compute-0 sudo[50620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:58 compute-0 python3.9[50622]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935477.4060614-224-79759771986923/.source _original_basename=.ow39ac49 follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:58 compute-0 sudo[50620]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:58 compute-0 sudo[50772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obytdogcrspwijaxxnpsliryombulvxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935478.7136185-239-221277591017625/AnsiballZ_file.py'
Jan 20 18:57:58 compute-0 sudo[50772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:59 compute-0 python3.9[50774]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:57:59 compute-0 sudo[50772]: pam_unix(sudo:session): session closed for user root
Jan 20 18:57:59 compute-0 sudo[50924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qenqeshsosfnqxtskbuqommyhjfczkze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935479.2871506-247-238162119982224/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 20 18:57:59 compute-0 sudo[50924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:57:59 compute-0 python3.9[50926]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 20 18:57:59 compute-0 sudo[50924]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:00 compute-0 sudo[51076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snyfttxmaxkuplxkobyleopdndfaiwxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935480.0144389-256-193377566422370/AnsiballZ_file.py'
Jan 20 18:58:00 compute-0 sudo[51076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:00 compute-0 python3.9[51078]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:00 compute-0 sudo[51076]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:01 compute-0 sudo[51228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idpyasucdocsixicxqpkgvjrmshbokyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935480.8320842-266-232387624690479/AnsiballZ_stat.py'
Jan 20 18:58:01 compute-0 sudo[51228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:01 compute-0 sudo[51228]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:01 compute-0 sudo[51351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxfpullcxqikhoyasdwkpynpydihiidm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935480.8320842-266-232387624690479/AnsiballZ_copy.py'
Jan 20 18:58:01 compute-0 sudo[51351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:01 compute-0 sudo[51351]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:02 compute-0 sudo[51503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovgohoxwelmsnumeivbsqxsgjinxuggl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935482.0027466-281-207908969959768/AnsiballZ_slurp.py'
Jan 20 18:58:02 compute-0 sudo[51503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:02 compute-0 python3.9[51505]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 20 18:58:02 compute-0 sudo[51503]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:03 compute-0 sudo[51678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-geymnoshlexpesbcwjnaapbrwhjqnwas ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935482.796708-290-272436610895250/async_wrapper.py j188714558496 300 /home/zuul/.ansible/tmp/ansible-tmp-1768935482.796708-290-272436610895250/AnsiballZ_edpm_os_net_config.py _'
Jan 20 18:58:03 compute-0 sudo[51678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:03 compute-0 ansible-async_wrapper.py[51680]: Invoked with j188714558496 300 /home/zuul/.ansible/tmp/ansible-tmp-1768935482.796708-290-272436610895250/AnsiballZ_edpm_os_net_config.py _
Jan 20 18:58:03 compute-0 ansible-async_wrapper.py[51683]: Starting module and watcher
Jan 20 18:58:03 compute-0 ansible-async_wrapper.py[51683]: Start watching 51684 (300)
Jan 20 18:58:03 compute-0 ansible-async_wrapper.py[51684]: Start module (51684)
Jan 20 18:58:03 compute-0 ansible-async_wrapper.py[51680]: Return async_wrapper task started.
Jan 20 18:58:03 compute-0 sudo[51678]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:03 compute-0 python3.9[51685]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 20 18:58:04 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 20 18:58:04 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 20 18:58:04 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 20 18:58:04 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 20 18:58:04 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.3716] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.3734] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4191] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4192] audit: op="connection-add" uuid="b467c5ba-25b1-4fe0-a044-a98bd5a8ea8f" name="br-ex-br" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4207] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4208] audit: op="connection-add" uuid="a30c3806-19c1-4774-9144-062e5e999330" name="br-ex-port" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4217] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4218] audit: op="connection-add" uuid="d84fac66-13fa-47e8-89b3-8ce25616c31c" name="eth1-port" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4228] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4229] audit: op="connection-add" uuid="2c1ba911-ad09-4bdc-985f-0b695ec2a13b" name="vlan20-port" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4239] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4240] audit: op="connection-add" uuid="c2689e70-8772-4d0c-9ef0-e140a0a893c7" name="vlan21-port" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4250] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4251] audit: op="connection-add" uuid="92954c86-a800-48d3-86f9-70f1d9766cca" name="vlan22-port" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4261] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4263] audit: op="connection-add" uuid="ce17356c-fe12-4caa-a7c1-55f601f4690b" name="vlan23-port" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4280] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4293] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4295] audit: op="connection-add" uuid="7628cce7-0f52-4351-b287-3dcb42e8f166" name="br-ex-if" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4349] audit: op="connection-update" uuid="3f70ede9-7960-5c64-9771-a2eedfd4d85a" name="ci-private-network" args="ipv6.routes,ipv6.addr-gen-mode,ipv6.dns,ipv6.routing-rules,ipv6.addresses,ipv6.method,ipv4.routes,ipv4.dns,ipv4.routing-rules,ipv4.never-default,ipv4.addresses,ipv4.method,ovs-interface.type,connection.controller,connection.slave-type,connection.port-type,connection.timestamp,connection.master,ovs-external-ids.data" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4364] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4365] audit: op="connection-add" uuid="bb30e430-d451-4997-93f3-7de1908603e7" name="vlan20-if" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4379] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4381] audit: op="connection-add" uuid="3aaca0df-ed3f-42e3-a752-631aacaa7601" name="vlan21-if" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4395] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4396] audit: op="connection-add" uuid="e4b39470-b83b-4f06-bda5-6893e6ab1573" name="vlan22-if" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4412] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4413] audit: op="connection-add" uuid="a9cea091-39e4-4b02-9e33-016e2f8116e5" name="vlan23-if" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4426] audit: op="connection-delete" uuid="fd33b000-20d4-3dcd-9e30-523cad9af7fa" name="Wired connection 1" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4438] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4441] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4448] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4451] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (b467c5ba-25b1-4fe0-a044-a98bd5a8ea8f)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4451] audit: op="connection-activate" uuid="b467c5ba-25b1-4fe0-a044-a98bd5a8ea8f" name="br-ex-br" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4452] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4453] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Success
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4456] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4459] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (a30c3806-19c1-4774-9144-062e5e999330)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4460] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4461] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4464] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4467] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d84fac66-13fa-47e8-89b3-8ce25616c31c)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4468] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4468] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4472] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4475] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (2c1ba911-ad09-4bdc-985f-0b695ec2a13b)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4476] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4477] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4480] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4483] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (c2689e70-8772-4d0c-9ef0-e140a0a893c7)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4484] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4484] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4488] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4490] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (92954c86-a800-48d3-86f9-70f1d9766cca)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4491] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4492] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4496] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4499] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (ce17356c-fe12-4caa-a7c1-55f601f4690b)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4500] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4501] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4503] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4507] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4508] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4510] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4513] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (7628cce7-0f52-4351-b287-3dcb42e8f166)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4514] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4516] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4517] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4518] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4518] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4525] device (eth1): disconnecting for new activation request.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4526] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4528] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4529] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4530] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4532] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4532] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4534] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4537] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (bb30e430-d451-4997-93f3-7de1908603e7)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4537] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4539] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4541] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4542] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4544] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4544] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4546] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4549] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (3aaca0df-ed3f-42e3-a752-631aacaa7601)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4549] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4551] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4552] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4553] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4555] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4556] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4558] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4561] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (e4b39470-b83b-4f06-bda5-6893e6ab1573)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4562] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4564] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4566] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4566] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4568] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <warn>  [1768935485.4569] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4571] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4574] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (a9cea091-39e4-4b02-9e33-016e2f8116e5)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4575] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4576] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4578] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4578] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4580] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4589] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,802-3-ethernet.mtu,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4591] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4593] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4595] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4600] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4603] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4606] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4608] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4609] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4612] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4616] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4618] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4620] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4633] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4637] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4639] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4641] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4645] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4648] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4650] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4652] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4656] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4659] dhcp4 (eth0): canceled DHCP transaction
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4660] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4660] dhcp4 (eth0): state changed no lease
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4661] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 20 18:58:05 compute-0 systemd-udevd[51690]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:58:05 compute-0 kernel: Timeout policy base is empty
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4670] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4673] audit: op="device-reapply" interface="eth1" ifindex=3 pid=51686 uid=0 result="fail" reason="Device is not activated"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4706] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4709] dhcp4 (eth0): state changed new lease, address=38.102.83.210
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4745] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4750] device (eth1): disconnecting for new activation request.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4751] audit: op="connection-activate" uuid="3f70ede9-7960-5c64-9771-a2eedfd4d85a" name="ci-private-network" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4752] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4756] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4783] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51686 uid=0 result="success"
Jan 20 18:58:05 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.4912] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 20 18:58:05 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 20 18:58:05 compute-0 kernel: br-ex: entered promiscuous mode
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5262] device (eth1): Activation: starting connection 'ci-private-network' (3f70ede9-7960-5c64-9771-a2eedfd4d85a)
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5268] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5279] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5283] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5289] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5293] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5304] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5306] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5307] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5308] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5310] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5311] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5328] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5337] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5340] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5344] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5348] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5354] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5358] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5363] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5367] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5372] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5376] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5380] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5384] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5392] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5398] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5406] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5414] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5422] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 kernel: vlan22: entered promiscuous mode
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5428] device (eth1): Activation: successful, device activated.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5443] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5478] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5480] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 kernel: vlan21: entered promiscuous mode
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5484] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 18:58:05 compute-0 systemd-udevd[51691]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5518] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5532] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5546] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5548] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5553] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 18:58:05 compute-0 kernel: vlan23: entered promiscuous mode
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5601] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5615] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 kernel: vlan20: entered promiscuous mode
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5634] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5637] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5641] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 18:58:05 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5731] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5743] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5754] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5771] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5781] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5782] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5789] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5798] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5799] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 20 18:58:05 compute-0 NetworkManager[48913]: <info>  [1768935485.5804] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 20 18:58:06 compute-0 NetworkManager[48913]: <info>  [1768935486.7013] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51686 uid=0 result="success"
Jan 20 18:58:06 compute-0 NetworkManager[48913]: <info>  [1768935486.8326] checkpoint[0x55d9d0016950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 20 18:58:06 compute-0 NetworkManager[48913]: <info>  [1768935486.8328] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=51686 uid=0 result="success"
Jan 20 18:58:07 compute-0 NetworkManager[48913]: <info>  [1768935487.0752] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51686 uid=0 result="success"
Jan 20 18:58:07 compute-0 NetworkManager[48913]: <info>  [1768935487.0760] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51686 uid=0 result="success"
Jan 20 18:58:07 compute-0 sudo[52043]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsllzwqkgtsjbtvenudocxadnwbeewaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935486.7365575-290-200794452435089/AnsiballZ_async_status.py'
Jan 20 18:58:07 compute-0 sudo[52043]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:07 compute-0 NetworkManager[48913]: <info>  [1768935487.2510] audit: op="networking-control" arg="global-dns-configuration" pid=51686 uid=0 result="success"
Jan 20 18:58:07 compute-0 NetworkManager[48913]: <info>  [1768935487.2541] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 20 18:58:07 compute-0 NetworkManager[48913]: <info>  [1768935487.2570] audit: op="networking-control" arg="global-dns-configuration" pid=51686 uid=0 result="success"
Jan 20 18:58:07 compute-0 NetworkManager[48913]: <info>  [1768935487.2589] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51686 uid=0 result="success"
Jan 20 18:58:07 compute-0 python3.9[52045]: ansible-ansible.legacy.async_status Invoked with jid=j188714558496.51680 mode=status _async_dir=/root/.ansible_async
Jan 20 18:58:07 compute-0 sudo[52043]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:07 compute-0 NetworkManager[48913]: <info>  [1768935487.3783] checkpoint[0x55d9d0016a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 20 18:58:07 compute-0 NetworkManager[48913]: <info>  [1768935487.3788] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=51686 uid=0 result="success"
Jan 20 18:58:07 compute-0 ansible-async_wrapper.py[51684]: Module complete (51684)
Jan 20 18:58:08 compute-0 ansible-async_wrapper.py[51683]: Done in kid B.
Jan 20 18:58:10 compute-0 sudo[52147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mraaxkmkmgrxbnakzckmvjreubgithrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935486.7365575-290-200794452435089/AnsiballZ_async_status.py'
Jan 20 18:58:10 compute-0 sudo[52147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:10 compute-0 python3.9[52149]: ansible-ansible.legacy.async_status Invoked with jid=j188714558496.51680 mode=status _async_dir=/root/.ansible_async
Jan 20 18:58:10 compute-0 sudo[52147]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:11 compute-0 sudo[52247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yobfubkomxfhcrvbtmwaboildcbqqbgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935486.7365575-290-200794452435089/AnsiballZ_async_status.py'
Jan 20 18:58:11 compute-0 sudo[52247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:11 compute-0 python3.9[52249]: ansible-ansible.legacy.async_status Invoked with jid=j188714558496.51680 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 18:58:11 compute-0 sudo[52247]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:11 compute-0 sudo[52399]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejgmmmivxprzmtqdnaiczyawgjudcgop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935491.5024483-317-195957931786002/AnsiballZ_stat.py'
Jan 20 18:58:11 compute-0 sudo[52399]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:11 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 20 18:58:11 compute-0 python3.9[52401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:11 compute-0 sudo[52399]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:12 compute-0 sudo[52524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sizrvoszamqujtidcknsziwuoalpodhg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935491.5024483-317-195957931786002/AnsiballZ_copy.py'
Jan 20 18:58:12 compute-0 sudo[52524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:12 compute-0 python3.9[52526]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935491.5024483-317-195957931786002/.source.returncode _original_basename=.h4a5dbis follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:12 compute-0 sudo[52524]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:12 compute-0 sudo[52678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvzzgardvsnmpoybtjhffaoxtucxtoap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935492.615944-333-144128445078362/AnsiballZ_stat.py'
Jan 20 18:58:12 compute-0 sudo[52678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:12 compute-0 sshd-session[52527]: Invalid user solana from 45.148.10.240 port 49940
Jan 20 18:58:13 compute-0 python3.9[52680]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:13 compute-0 sudo[52678]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:13 compute-0 sshd-session[52527]: Connection closed by invalid user solana 45.148.10.240 port 49940 [preauth]
Jan 20 18:58:13 compute-0 sudo[52801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qypxowaorgdctkpfgibhfqtiawqjmwih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935492.615944-333-144128445078362/AnsiballZ_copy.py'
Jan 20 18:58:13 compute-0 sudo[52801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:13 compute-0 python3.9[52803]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935492.615944-333-144128445078362/.source.cfg _original_basename=.m__ydcdp follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:13 compute-0 sudo[52801]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:13 compute-0 sudo[52953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnfmsrualcdrstogcodebhuotleswdxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935493.6643202-348-168471136292270/AnsiballZ_systemd.py'
Jan 20 18:58:13 compute-0 sudo[52953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:14 compute-0 python3.9[52955]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:58:14 compute-0 systemd[1]: Reloading Network Manager...
Jan 20 18:58:14 compute-0 NetworkManager[48913]: <info>  [1768935494.2396] audit: op="reload" arg="0" pid=52960 uid=0 result="success"
Jan 20 18:58:14 compute-0 NetworkManager[48913]: <info>  [1768935494.2406] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 20 18:58:14 compute-0 systemd[1]: Reloaded Network Manager.
Jan 20 18:58:14 compute-0 sudo[52953]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:14 compute-0 sshd-session[44912]: Connection closed by 192.168.122.30 port 51954
Jan 20 18:58:14 compute-0 sshd-session[44909]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:58:14 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 20 18:58:14 compute-0 systemd[1]: session-10.scope: Consumed 50.193s CPU time.
Jan 20 18:58:14 compute-0 systemd-logind[797]: Session 10 logged out. Waiting for processes to exit.
Jan 20 18:58:14 compute-0 systemd-logind[797]: Removed session 10.
Jan 20 18:58:21 compute-0 sshd-session[52991]: Accepted publickey for zuul from 192.168.122.30 port 45598 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 18:58:21 compute-0 systemd-logind[797]: New session 11 of user zuul.
Jan 20 18:58:21 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 20 18:58:21 compute-0 sshd-session[52991]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:58:22 compute-0 python3.9[53144]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:58:23 compute-0 python3.9[53298]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:58:24 compute-0 python3.9[53492]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:58:24 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 20 18:58:24 compute-0 sshd-session[52994]: Connection closed by 192.168.122.30 port 45598
Jan 20 18:58:24 compute-0 sshd-session[52991]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:58:24 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 20 18:58:24 compute-0 systemd[1]: session-11.scope: Consumed 2.167s CPU time.
Jan 20 18:58:24 compute-0 systemd-logind[797]: Session 11 logged out. Waiting for processes to exit.
Jan 20 18:58:24 compute-0 systemd-logind[797]: Removed session 11.
Jan 20 18:58:30 compute-0 sshd-session[53521]: Accepted publickey for zuul from 192.168.122.30 port 60020 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 18:58:30 compute-0 systemd-logind[797]: New session 12 of user zuul.
Jan 20 18:58:30 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 20 18:58:30 compute-0 sshd-session[53521]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:58:31 compute-0 python3.9[53674]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:58:32 compute-0 python3.9[53828]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:58:32 compute-0 sudo[53982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kuyqmiprojxqmqovmrwqzxzryozeqehk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935512.5443459-35-49914106477401/AnsiballZ_setup.py'
Jan 20 18:58:32 compute-0 sudo[53982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:33 compute-0 python3.9[53984]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:58:33 compute-0 sudo[53982]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:33 compute-0 sudo[54067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqinwyoophbygaejmvqesuppgpdmcxsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935512.5443459-35-49914106477401/AnsiballZ_dnf.py'
Jan 20 18:58:33 compute-0 sudo[54067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:33 compute-0 python3.9[54069]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:58:35 compute-0 sudo[54067]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:35 compute-0 sudo[54220]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erzbosnmjskvkwcxccemtezjnepcqlfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935515.6288784-47-185840228038012/AnsiballZ_setup.py'
Jan 20 18:58:35 compute-0 sudo[54220]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:36 compute-0 python3.9[54222]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:58:36 compute-0 sudo[54220]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:36 compute-0 sudo[54416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkgnezwzeghppnmumkimqodvnlqzsnaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935516.5990357-58-270343127753833/AnsiballZ_file.py'
Jan 20 18:58:36 compute-0 sudo[54416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:37 compute-0 python3.9[54418]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:37 compute-0 sudo[54416]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:37 compute-0 sudo[54570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vdizmmdhavmmsgxtvfksgscokfzorkco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935517.3575535-66-122836940721183/AnsiballZ_command.py'
Jan 20 18:58:37 compute-0 sudo[54570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:37 compute-0 python3.9[54572]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:58:37 compute-0 sshd-session[54466]: Invalid user admin from 2.57.121.112 port 17263
Jan 20 18:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat3898173099-merged.mount: Deactivated successfully.
Jan 20 18:58:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-metacopy\x2dcheck1129975980-merged.mount: Deactivated successfully.
Jan 20 18:58:38 compute-0 podman[54573]: 2026-01-20 18:58:38.027319029 +0000 UTC m=+0.052505213 system refresh
Jan 20 18:58:38 compute-0 sudo[54570]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:38 compute-0 sshd-session[54466]: Received disconnect from 2.57.121.112 port 17263:11: Bye [preauth]
Jan 20 18:58:38 compute-0 sshd-session[54466]: Disconnected from invalid user admin 2.57.121.112 port 17263 [preauth]
Jan 20 18:58:38 compute-0 sudo[54733]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lehvnlzeesdxrivevydtnefhtoqmyybq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935518.1998527-74-174261014931238/AnsiballZ_stat.py'
Jan 20 18:58:38 compute-0 sudo[54733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:38 compute-0 python3.9[54735]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:38 compute-0 sudo[54733]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:39 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 18:58:39 compute-0 sudo[54856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtotkccuptoqjyyzezmheclyxiaqeddq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935518.1998527-74-174261014931238/AnsiballZ_copy.py'
Jan 20 18:58:39 compute-0 sudo[54856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:39 compute-0 python3.9[54858]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935518.1998527-74-174261014931238/.source.json follow=False _original_basename=podman_network_config.j2 checksum=db153e063bda690dbde9b625a14eb97c349f5d6f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:39 compute-0 sudo[54856]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:39 compute-0 sudo[55008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjyhjiujiyyvvgqtpbujkjlykcehnxfh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935519.6916678-89-168550325786552/AnsiballZ_stat.py'
Jan 20 18:58:39 compute-0 sudo[55008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:40 compute-0 python3.9[55010]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:40 compute-0 sudo[55008]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:40 compute-0 sudo[55131]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-redjuisqcdeyxrdrpwcxasexztfaqwjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935519.6916678-89-168550325786552/AnsiballZ_copy.py'
Jan 20 18:58:40 compute-0 sudo[55131]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:40 compute-0 python3.9[55133]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935519.6916678-89-168550325786552/.source.conf follow=False _original_basename=registries.conf.j2 checksum=231117e605c41d48bc567c0404cb51471711010a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:58:40 compute-0 sudo[55131]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:41 compute-0 sudo[55283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpbrtflbwfkctpefsfycvoksslojrsdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935520.7603717-105-166054099563035/AnsiballZ_ini_file.py'
Jan 20 18:58:41 compute-0 sudo[55283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:41 compute-0 python3.9[55285]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:58:41 compute-0 sudo[55283]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:41 compute-0 sudo[55435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fasvnjpbntfqkvvcysdlvwlmzknwujyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935521.4969635-105-108331571179344/AnsiballZ_ini_file.py'
Jan 20 18:58:41 compute-0 sudo[55435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:41 compute-0 python3.9[55437]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:58:41 compute-0 sudo[55435]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:42 compute-0 sudo[55587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkdktbqeaspdbqwpljuotiqqfmaayght ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935522.0592492-105-174247365730407/AnsiballZ_ini_file.py'
Jan 20 18:58:42 compute-0 sudo[55587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:42 compute-0 python3.9[55589]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:58:42 compute-0 sudo[55587]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:42 compute-0 sudo[55739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-miuffobywwhxgcfofayqbjhwlitlrvzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935522.629232-105-216354760213054/AnsiballZ_ini_file.py'
Jan 20 18:58:42 compute-0 sudo[55739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:43 compute-0 python3.9[55741]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:58:43 compute-0 sudo[55739]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:43 compute-0 sudo[55891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfslmtcucvkhcyftcgrfbvjdcphreuqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935523.2765605-136-33502397908987/AnsiballZ_dnf.py'
Jan 20 18:58:43 compute-0 sudo[55891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:43 compute-0 python3.9[55893]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:58:44 compute-0 sudo[55891]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:45 compute-0 sudo[56044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhveitunwugxzcvojnxuwspoduqzqobm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935525.3079832-147-101697368132914/AnsiballZ_setup.py'
Jan 20 18:58:45 compute-0 sudo[56044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:45 compute-0 python3.9[56046]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:58:45 compute-0 sudo[56044]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:46 compute-0 sudo[56198]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsrnwdsobslrzcvggxrlqurfeeycrvce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935525.9966161-155-239773720455953/AnsiballZ_stat.py'
Jan 20 18:58:46 compute-0 sudo[56198]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:46 compute-0 python3.9[56200]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:58:46 compute-0 sudo[56198]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:46 compute-0 sudo[56350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjchmgqpsnpxhpnrccuztticwfsazduz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935526.6543534-164-215763629794420/AnsiballZ_stat.py'
Jan 20 18:58:46 compute-0 sudo[56350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:47 compute-0 python3.9[56352]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 18:58:47 compute-0 sudo[56350]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:47 compute-0 sudo[56502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slkvcqdsokmkqsydwwwxqcubzxojrkfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935527.4178152-174-258294553847838/AnsiballZ_command.py'
Jan 20 18:58:47 compute-0 sudo[56502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:47 compute-0 python3.9[56504]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:58:47 compute-0 sudo[56502]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:48 compute-0 sudo[56655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wburcgfgzgjcvervuuksrwgegzytvlcg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935528.093809-184-104849681502334/AnsiballZ_service_facts.py'
Jan 20 18:58:48 compute-0 sudo[56655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:48 compute-0 python3.9[56657]: ansible-service_facts Invoked
Jan 20 18:58:48 compute-0 network[56674]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 18:58:48 compute-0 network[56675]: 'network-scripts' will be removed from distribution in near future.
Jan 20 18:58:48 compute-0 network[56676]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 18:58:52 compute-0 sudo[56655]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:53 compute-0 sudo[56959]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiwhntlvwtflxarrtiliucguxcdobeoi ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1768935533.1997905-199-5261410836165/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1768935533.1997905-199-5261410836165/args'
Jan 20 18:58:53 compute-0 sudo[56959]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:53 compute-0 sudo[56959]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:54 compute-0 sudo[57126]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrfwdrsdpelrwofdmkumhwueztzohjog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935533.774091-210-81670820197776/AnsiballZ_dnf.py'
Jan 20 18:58:54 compute-0 sudo[57126]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:54 compute-0 python3.9[57128]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 18:58:55 compute-0 sudo[57126]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:56 compute-0 sudo[57279]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwtloirrviysuegnqsegeflptblwpnrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935535.8390436-223-245189397801824/AnsiballZ_package_facts.py'
Jan 20 18:58:56 compute-0 sudo[57279]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:56 compute-0 python3.9[57281]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 20 18:58:56 compute-0 sudo[57279]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:57 compute-0 sudo[57431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nitpwypyzworqpnndawfgjmpxykvllzr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935537.3010283-233-10049290404278/AnsiballZ_stat.py'
Jan 20 18:58:57 compute-0 sudo[57431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:57 compute-0 python3.9[57433]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:57 compute-0 sudo[57431]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:58 compute-0 sudo[57556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbodpmiapybjvhdcbsisxnpyqkilacsa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935537.3010283-233-10049290404278/AnsiballZ_copy.py'
Jan 20 18:58:58 compute-0 sudo[57556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:58 compute-0 python3.9[57558]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935537.3010283-233-10049290404278/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:58 compute-0 sudo[57556]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:58 compute-0 sudo[57710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdhdmqxokmzoabiutdcohpzvsaoyaeza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935538.6792006-248-187676939680742/AnsiballZ_stat.py'
Jan 20 18:58:58 compute-0 sudo[57710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:59 compute-0 python3.9[57712]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:58:59 compute-0 sudo[57710]: pam_unix(sudo:session): session closed for user root
Jan 20 18:58:59 compute-0 sudo[57835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzmkwzrypjevkvshmgwngqjsgfffadrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935538.6792006-248-187676939680742/AnsiballZ_copy.py'
Jan 20 18:58:59 compute-0 sudo[57835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:58:59 compute-0 python3.9[57837]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935538.6792006-248-187676939680742/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:58:59 compute-0 sudo[57835]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:00 compute-0 sudo[57989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxbftdrjcmkvcmdxasqdormizplltlck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935540.19059-269-64159650080227/AnsiballZ_lineinfile.py'
Jan 20 18:59:00 compute-0 sudo[57989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:00 compute-0 python3.9[57991]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:00 compute-0 sudo[57989]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:01 compute-0 sudo[58143]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iiultymmwvqsvxxkplytamznthtssjmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935541.4033399-284-32854195234920/AnsiballZ_setup.py'
Jan 20 18:59:01 compute-0 sudo[58143]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:01 compute-0 python3.9[58145]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:59:02 compute-0 sudo[58143]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:02 compute-0 sudo[58227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksfclkmkwhvzwqtmfksntwjzruugvnni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935541.4033399-284-32854195234920/AnsiballZ_systemd.py'
Jan 20 18:59:02 compute-0 sudo[58227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:02 compute-0 python3.9[58229]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:59:03 compute-0 sudo[58227]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:03 compute-0 sudo[58381]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehztcmwqxdatgjfjluaphhvekihkicli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935543.4407356-300-265316931523534/AnsiballZ_setup.py'
Jan 20 18:59:03 compute-0 sudo[58381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:03 compute-0 python3.9[58383]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 18:59:04 compute-0 sudo[58381]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:04 compute-0 sudo[58465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfylcacctfqjavxqezzpthukdjyydxdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935543.4407356-300-265316931523534/AnsiballZ_systemd.py'
Jan 20 18:59:04 compute-0 sudo[58465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:04 compute-0 python3.9[58467]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:59:04 compute-0 chronyd[784]: chronyd exiting
Jan 20 18:59:04 compute-0 systemd[1]: Stopping NTP client/server...
Jan 20 18:59:04 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 20 18:59:04 compute-0 systemd[1]: Stopped NTP client/server.
Jan 20 18:59:04 compute-0 systemd[1]: Starting NTP client/server...
Jan 20 18:59:04 compute-0 chronyd[58476]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 20 18:59:04 compute-0 chronyd[58476]: Frequency -23.108 +/- 0.486 ppm read from /var/lib/chrony/drift
Jan 20 18:59:04 compute-0 chronyd[58476]: Loaded seccomp filter (level 2)
Jan 20 18:59:04 compute-0 systemd[1]: Started NTP client/server.
Jan 20 18:59:04 compute-0 sudo[58465]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:05 compute-0 sshd-session[53524]: Connection closed by 192.168.122.30 port 60020
Jan 20 18:59:05 compute-0 sshd-session[53521]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:59:05 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 20 18:59:05 compute-0 systemd[1]: session-12.scope: Consumed 24.653s CPU time.
Jan 20 18:59:05 compute-0 systemd-logind[797]: Session 12 logged out. Waiting for processes to exit.
Jan 20 18:59:05 compute-0 systemd-logind[797]: Removed session 12.
Jan 20 18:59:10 compute-0 sshd-session[58502]: Accepted publickey for zuul from 192.168.122.30 port 48660 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 18:59:10 compute-0 systemd-logind[797]: New session 13 of user zuul.
Jan 20 18:59:10 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 20 18:59:10 compute-0 sshd-session[58502]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:59:10 compute-0 sudo[58655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjfwoaadtkaenpggnomtsracwapbwamc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935550.5235288-17-152618036960952/AnsiballZ_file.py'
Jan 20 18:59:10 compute-0 sudo[58655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:11 compute-0 python3.9[58657]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:11 compute-0 sudo[58655]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:11 compute-0 sudo[58807]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgteathvunmzwszgoyarzlprlaumbaqn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935551.3144825-29-142132317846135/AnsiballZ_stat.py'
Jan 20 18:59:11 compute-0 sudo[58807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:11 compute-0 python3.9[58809]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:11 compute-0 sudo[58807]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:12 compute-0 sudo[58930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axfdqswnvlpfzzygudddwwduhwsaxgrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935551.3144825-29-142132317846135/AnsiballZ_copy.py'
Jan 20 18:59:12 compute-0 sudo[58930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:12 compute-0 python3.9[58932]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935551.3144825-29-142132317846135/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:12 compute-0 sudo[58930]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:12 compute-0 sshd-session[58505]: Connection closed by 192.168.122.30 port 48660
Jan 20 18:59:12 compute-0 sshd-session[58502]: pam_unix(sshd:session): session closed for user zuul
Jan 20 18:59:12 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 20 18:59:12 compute-0 systemd[1]: session-13.scope: Consumed 1.531s CPU time.
Jan 20 18:59:12 compute-0 systemd-logind[797]: Session 13 logged out. Waiting for processes to exit.
Jan 20 18:59:12 compute-0 systemd-logind[797]: Removed session 13.
Jan 20 18:59:18 compute-0 sshd-session[58957]: Accepted publickey for zuul from 192.168.122.30 port 59172 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 18:59:18 compute-0 systemd-logind[797]: New session 14 of user zuul.
Jan 20 18:59:18 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 20 18:59:18 compute-0 sshd-session[58957]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 18:59:19 compute-0 python3.9[59110]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 18:59:20 compute-0 sudo[59264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxvhtjyytmdvutibvcuimnrngfxjobgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935560.3626854-28-161223414142026/AnsiballZ_file.py'
Jan 20 18:59:20 compute-0 sudo[59264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:21 compute-0 python3.9[59266]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:21 compute-0 sudo[59264]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:22 compute-0 sudo[59439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrqcmxcujvhjlcllmjsfdbudiesxiooe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935561.4677958-36-64825458555269/AnsiballZ_stat.py'
Jan 20 18:59:22 compute-0 sudo[59439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:22 compute-0 python3.9[59441]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:22 compute-0 sudo[59439]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:22 compute-0 sudo[59562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmnvalnrcecwrdcoruarnfnetfiwyrsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935561.4677958-36-64825458555269/AnsiballZ_copy.py'
Jan 20 18:59:22 compute-0 sudo[59562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:22 compute-0 python3.9[59564]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1768935561.4677958-36-64825458555269/.source.json _original_basename=.zthxwr2y follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:22 compute-0 sudo[59562]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:23 compute-0 sudo[59714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aouiaynjsuxltoxksqwpieffasrvptul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935563.234755-59-85937451567981/AnsiballZ_stat.py'
Jan 20 18:59:23 compute-0 sudo[59714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:23 compute-0 python3.9[59716]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:23 compute-0 sudo[59714]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:24 compute-0 sudo[59837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isoeytyhwckwzlpbnqvblbaeyujvudol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935563.234755-59-85937451567981/AnsiballZ_copy.py'
Jan 20 18:59:24 compute-0 sudo[59837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:24 compute-0 python3.9[59839]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935563.234755-59-85937451567981/.source _original_basename=.bmejx97j follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:24 compute-0 sudo[59837]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:24 compute-0 sudo[59989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwpaddhedfbyslcfomzdfcyouorloxjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935564.424538-75-262734076206360/AnsiballZ_file.py'
Jan 20 18:59:24 compute-0 sudo[59989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:24 compute-0 python3.9[59991]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:59:24 compute-0 sudo[59989]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:25 compute-0 sudo[60141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkipiwpfpcgylwcltpyfkayhssdrxqvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935565.1422527-83-156714611444195/AnsiballZ_stat.py'
Jan 20 18:59:25 compute-0 sudo[60141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:25 compute-0 python3.9[60143]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:25 compute-0 sudo[60141]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:25 compute-0 sudo[60264]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leyplmrmquhdmvysrmbdcuzfzyfpoobl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935565.1422527-83-156714611444195/AnsiballZ_copy.py'
Jan 20 18:59:25 compute-0 sudo[60264]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:26 compute-0 python3.9[60266]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935565.1422527-83-156714611444195/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:59:26 compute-0 sudo[60264]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:26 compute-0 sudo[60416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltkvcdqvzcroilpoayxmhmznvmrormgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935566.2098558-83-13006947650402/AnsiballZ_stat.py'
Jan 20 18:59:26 compute-0 sudo[60416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:26 compute-0 python3.9[60418]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:26 compute-0 sudo[60416]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:27 compute-0 sudo[60539]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dshzptkpqacgckplfxftszivpaisrqnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935566.2098558-83-13006947650402/AnsiballZ_copy.py'
Jan 20 18:59:27 compute-0 sudo[60539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:27 compute-0 python3.9[60541]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768935566.2098558-83-13006947650402/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 18:59:27 compute-0 sudo[60539]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:27 compute-0 sudo[60691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aelqjwwrvnbrptftiybmourlfohhvpdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935567.4776747-112-74516772845128/AnsiballZ_file.py'
Jan 20 18:59:27 compute-0 sudo[60691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:28 compute-0 python3.9[60693]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:28 compute-0 sudo[60691]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:28 compute-0 sudo[60843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epcqshmevdnvnnynlvxmthczlaycippa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935568.2058494-120-113888982139769/AnsiballZ_stat.py'
Jan 20 18:59:28 compute-0 sudo[60843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:28 compute-0 python3.9[60845]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:28 compute-0 sudo[60843]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:29 compute-0 sudo[60966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjqrmljomuadgdxcovkuboosfgofnmao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935568.2058494-120-113888982139769/AnsiballZ_copy.py'
Jan 20 18:59:29 compute-0 sudo[60966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:29 compute-0 python3.9[60968]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935568.2058494-120-113888982139769/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:29 compute-0 sudo[60966]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:29 compute-0 sudo[61118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxmkgpylddjxnkbqvjfiixjghbtniiuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935569.396869-135-146289972984354/AnsiballZ_stat.py'
Jan 20 18:59:29 compute-0 sudo[61118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:29 compute-0 python3.9[61120]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:29 compute-0 sudo[61118]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:30 compute-0 sudo[61241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huslszmndpvccmltryescptrodtjmibt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935569.396869-135-146289972984354/AnsiballZ_copy.py'
Jan 20 18:59:30 compute-0 sudo[61241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:30 compute-0 python3.9[61243]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935569.396869-135-146289972984354/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:30 compute-0 sudo[61241]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:31 compute-0 sudo[61393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oplfkcwyffispmcqaqfjbwhqcbpthkrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935570.5189974-150-116766181751464/AnsiballZ_systemd.py'
Jan 20 18:59:31 compute-0 sudo[61393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:31 compute-0 python3.9[61395]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:59:31 compute-0 systemd[1]: Reloading.
Jan 20 18:59:31 compute-0 systemd-rc-local-generator[61422]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:59:31 compute-0 systemd-sysv-generator[61426]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:59:31 compute-0 systemd[1]: Reloading.
Jan 20 18:59:31 compute-0 systemd-rc-local-generator[61458]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:59:31 compute-0 systemd-sysv-generator[61462]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:59:31 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 20 18:59:31 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 20 18:59:31 compute-0 sudo[61393]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:32 compute-0 sudo[61620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ofbqfmouioyuaxgrdfgmzlmtybxaqjim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935572.0418088-158-244583400376586/AnsiballZ_stat.py'
Jan 20 18:59:32 compute-0 sudo[61620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:32 compute-0 python3.9[61622]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:32 compute-0 sudo[61620]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:32 compute-0 sudo[61743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdcvubthhoxvacvltnixaokcwtztzwjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935572.0418088-158-244583400376586/AnsiballZ_copy.py'
Jan 20 18:59:32 compute-0 sudo[61743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:33 compute-0 python3.9[61745]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935572.0418088-158-244583400376586/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:33 compute-0 sudo[61743]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:33 compute-0 sudo[61895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkgakxqlqthhyiytadmrtjoxufxunyme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935573.2934673-173-214388186955370/AnsiballZ_stat.py'
Jan 20 18:59:33 compute-0 sudo[61895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:33 compute-0 python3.9[61897]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:33 compute-0 sudo[61895]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:34 compute-0 sudo[62018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdcvmbnjqsvkxuxvlfhrypnrlvjecdrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935573.2934673-173-214388186955370/AnsiballZ_copy.py'
Jan 20 18:59:34 compute-0 sudo[62018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:34 compute-0 python3.9[62020]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935573.2934673-173-214388186955370/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:34 compute-0 sudo[62018]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:34 compute-0 sudo[62170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpklayrhvxgdfnfifcfepwecjmeurztm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935574.414772-188-243669435769173/AnsiballZ_systemd.py'
Jan 20 18:59:34 compute-0 sudo[62170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:34 compute-0 python3.9[62172]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:59:35 compute-0 systemd[1]: Reloading.
Jan 20 18:59:35 compute-0 systemd-rc-local-generator[62197]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:59:35 compute-0 systemd-sysv-generator[62200]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:59:35 compute-0 systemd[1]: Reloading.
Jan 20 18:59:35 compute-0 systemd-rc-local-generator[62236]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:59:35 compute-0 systemd-sysv-generator[62240]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:59:35 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 18:59:35 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 18:59:35 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 18:59:35 compute-0 systemd[1]: Finished Create netns directory.
Jan 20 18:59:35 compute-0 sudo[62170]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:36 compute-0 python3.9[62398]: ansible-ansible.builtin.service_facts Invoked
Jan 20 18:59:36 compute-0 network[62415]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 18:59:36 compute-0 network[62416]: 'network-scripts' will be removed from distribution in near future.
Jan 20 18:59:36 compute-0 network[62417]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 18:59:39 compute-0 sudo[62677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-leeqjwtnhsnbwzcchbnjhgaqvmmpiigw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935578.943308-204-52168774454119/AnsiballZ_systemd.py'
Jan 20 18:59:39 compute-0 sudo[62677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:39 compute-0 python3.9[62679]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:59:39 compute-0 systemd[1]: Reloading.
Jan 20 18:59:39 compute-0 systemd-sysv-generator[62714]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:59:39 compute-0 systemd-rc-local-generator[62710]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:59:39 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 20 18:59:40 compute-0 iptables.init[62720]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 20 18:59:40 compute-0 iptables.init[62720]: iptables: Flushing firewall rules: [  OK  ]
Jan 20 18:59:40 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 20 18:59:40 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 20 18:59:40 compute-0 sudo[62677]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:40 compute-0 sudo[62914]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bakeghilakaxyoqigkpmsqxtfodtpwbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935580.2604415-204-242944484087515/AnsiballZ_systemd.py'
Jan 20 18:59:40 compute-0 sudo[62914]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:40 compute-0 python3.9[62916]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:59:40 compute-0 sudo[62914]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:41 compute-0 sudo[63068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhhsnnwiqaxmigotlftfvlgoyqvgvonn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935581.0996728-220-40510105882252/AnsiballZ_systemd.py'
Jan 20 18:59:41 compute-0 sudo[63068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:41 compute-0 python3.9[63070]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 18:59:41 compute-0 systemd[1]: Reloading.
Jan 20 18:59:41 compute-0 systemd-rc-local-generator[63100]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 18:59:41 compute-0 systemd-sysv-generator[63103]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 18:59:41 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 20 18:59:41 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 20 18:59:41 compute-0 sudo[63068]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:42 compute-0 sudo[63260]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czzgvshcnywlpcjeeldieanpbyerccsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935582.117362-228-26756019653986/AnsiballZ_command.py'
Jan 20 18:59:42 compute-0 sudo[63260]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:42 compute-0 python3.9[63262]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:59:42 compute-0 sudo[63260]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:43 compute-0 sudo[63413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owprltidudkeugwmaiqcfcvudxqbndja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935583.1455934-242-207465435754999/AnsiballZ_stat.py'
Jan 20 18:59:43 compute-0 sudo[63413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:43 compute-0 python3.9[63415]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:43 compute-0 sudo[63413]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:43 compute-0 sudo[63538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoaspfqptenzdxsthsxbqxujrktvdbwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935583.1455934-242-207465435754999/AnsiballZ_copy.py'
Jan 20 18:59:43 compute-0 sudo[63538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:44 compute-0 python3.9[63540]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935583.1455934-242-207465435754999/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:44 compute-0 sudo[63538]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:44 compute-0 sudo[63691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgylyrujoiepnrozrfuqwvaijviaobwc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935584.2885923-257-176707101697545/AnsiballZ_systemd.py'
Jan 20 18:59:44 compute-0 sudo[63691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:44 compute-0 python3.9[63693]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 18:59:44 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 20 18:59:44 compute-0 sshd[1008]: Received SIGHUP; restarting.
Jan 20 18:59:44 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 20 18:59:44 compute-0 sshd[1008]: Server listening on 0.0.0.0 port 22.
Jan 20 18:59:44 compute-0 sshd[1008]: Server listening on :: port 22.
Jan 20 18:59:45 compute-0 sudo[63691]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:45 compute-0 sudo[63847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-meievepwkjfgnmriuvpuwyhfxwitsgps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935585.1586673-265-107970466298327/AnsiballZ_file.py'
Jan 20 18:59:45 compute-0 sudo[63847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:45 compute-0 python3.9[63849]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:45 compute-0 sudo[63847]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:46 compute-0 sudo[63999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnilbddamybmugbxvflgqpdcjikgpvlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935585.778155-273-70952657120155/AnsiballZ_stat.py'
Jan 20 18:59:46 compute-0 sudo[63999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:46 compute-0 python3.9[64001]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:46 compute-0 sudo[63999]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:46 compute-0 sudo[64122]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jccelppihxdkrqszcmvgolclsezzyxnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935585.778155-273-70952657120155/AnsiballZ_copy.py'
Jan 20 18:59:46 compute-0 sudo[64122]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:46 compute-0 python3.9[64124]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935585.778155-273-70952657120155/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:46 compute-0 sudo[64122]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:47 compute-0 sudo[64274]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmfckqgoycibuaxibkzhmcxjxqehuzir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935587.0776992-291-152278131858658/AnsiballZ_timezone.py'
Jan 20 18:59:47 compute-0 sudo[64274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:47 compute-0 python3.9[64276]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 18:59:47 compute-0 systemd[1]: Starting Time & Date Service...
Jan 20 18:59:47 compute-0 systemd[1]: Started Time & Date Service.
Jan 20 18:59:48 compute-0 sudo[64274]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:49 compute-0 sudo[64430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vldkzxikgrjdbxitlylanyfowmwfhfiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935589.071278-300-162592925041773/AnsiballZ_file.py'
Jan 20 18:59:49 compute-0 sudo[64430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:49 compute-0 python3.9[64432]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:49 compute-0 sudo[64430]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:50 compute-0 sudo[64582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cplvpntdxuxaevrbdpvbcyxhuqwvauvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935589.8355293-308-141311201081625/AnsiballZ_stat.py'
Jan 20 18:59:50 compute-0 sudo[64582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:50 compute-0 python3.9[64584]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:50 compute-0 sudo[64582]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:50 compute-0 sudo[64705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixcechlinahujnjymbdlvujomduevide ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935589.8355293-308-141311201081625/AnsiballZ_copy.py'
Jan 20 18:59:50 compute-0 sudo[64705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:50 compute-0 python3.9[64707]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935589.8355293-308-141311201081625/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:50 compute-0 sudo[64705]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:51 compute-0 sudo[64857]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jncymvqklxrbklaaxnjvseaoyssshrvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935591.0020988-323-111044625505526/AnsiballZ_stat.py'
Jan 20 18:59:51 compute-0 sudo[64857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:51 compute-0 python3.9[64859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:51 compute-0 sudo[64857]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:51 compute-0 sudo[64980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bfjouesicczurbeuiudykcptmddhyosk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935591.0020988-323-111044625505526/AnsiballZ_copy.py'
Jan 20 18:59:51 compute-0 sudo[64980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:51 compute-0 python3.9[64982]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768935591.0020988-323-111044625505526/.source.yaml _original_basename=.g8ej4bjq follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:51 compute-0 sudo[64980]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:52 compute-0 sudo[65132]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yzfarlvcqqelxssuprjzdllxlykkkzhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935592.1043568-338-211352668903382/AnsiballZ_stat.py'
Jan 20 18:59:52 compute-0 sudo[65132]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:52 compute-0 python3.9[65134]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:52 compute-0 sudo[65132]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:52 compute-0 sudo[65255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xaaosbjznfpzwcmzrfrxdkgwanhtlmtz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935592.1043568-338-211352668903382/AnsiballZ_copy.py'
Jan 20 18:59:52 compute-0 sudo[65255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:53 compute-0 python3.9[65257]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935592.1043568-338-211352668903382/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:53 compute-0 sudo[65255]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:53 compute-0 sudo[65407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqonigdfkcvdsfnewxlxwqbwxnnjaudz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935593.2551396-353-135129236262469/AnsiballZ_command.py'
Jan 20 18:59:53 compute-0 sudo[65407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:53 compute-0 python3.9[65409]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:59:53 compute-0 sudo[65407]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:54 compute-0 sudo[65560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hirxwzcumyuxqenwbjrqnnnjsffnzfew ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935593.8417158-361-272294300991460/AnsiballZ_command.py'
Jan 20 18:59:54 compute-0 sudo[65560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:54 compute-0 python3.9[65562]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 18:59:54 compute-0 sudo[65560]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:54 compute-0 sudo[65713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-metznwtnngdqjbrpjnewltqeyjazaphx ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768935594.543192-369-201976100775874/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 18:59:54 compute-0 sudo[65713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:55 compute-0 python3[65715]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 18:59:55 compute-0 sudo[65713]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:55 compute-0 sudo[65865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mukjfschdoxayqfhpsuphokplpvezsup ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935595.4144685-377-157423801836268/AnsiballZ_stat.py'
Jan 20 18:59:55 compute-0 sudo[65865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:55 compute-0 python3.9[65867]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:55 compute-0 sudo[65865]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:56 compute-0 sudo[65988]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhpxffiljrkfndwekzusrtjggikhkxdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935595.4144685-377-157423801836268/AnsiballZ_copy.py'
Jan 20 18:59:56 compute-0 sudo[65988]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:56 compute-0 python3.9[65990]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935595.4144685-377-157423801836268/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:56 compute-0 sudo[65988]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:56 compute-0 sudo[66140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jewyqzlzposcxgqffiognoexxctbhqna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935596.6032147-392-90019042857631/AnsiballZ_stat.py'
Jan 20 18:59:56 compute-0 sudo[66140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:57 compute-0 python3.9[66142]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:57 compute-0 sudo[66140]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:57 compute-0 sudo[66263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxxyxhqkwfslehhatxksuqxknpgymsfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935596.6032147-392-90019042857631/AnsiballZ_copy.py'
Jan 20 18:59:57 compute-0 sudo[66263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:57 compute-0 python3.9[66265]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935596.6032147-392-90019042857631/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:57 compute-0 sudo[66263]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:57 compute-0 sudo[66415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lhsblpvlmvqggvhvxywcoeyahpukgqix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935597.753688-407-266459316976050/AnsiballZ_stat.py'
Jan 20 18:59:58 compute-0 sudo[66415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:58 compute-0 python3.9[66417]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:58 compute-0 sudo[66415]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:58 compute-0 sudo[66538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwlxkowsesgnpllytwklbdsxbcetufoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935597.753688-407-266459316976050/AnsiballZ_copy.py'
Jan 20 18:59:58 compute-0 sudo[66538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:58 compute-0 python3.9[66540]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935597.753688-407-266459316976050/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:58 compute-0 sudo[66538]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:59 compute-0 sudo[66690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkihzpsydgamwwrgbkmwkatlyafztvip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935598.8526566-422-80215271890943/AnsiballZ_stat.py'
Jan 20 18:59:59 compute-0 sudo[66690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:59 compute-0 python3.9[66692]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 18:59:59 compute-0 sudo[66690]: pam_unix(sudo:session): session closed for user root
Jan 20 18:59:59 compute-0 sudo[66813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpcqkmkzvothituqnrcvgozucxiqwbhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935598.8526566-422-80215271890943/AnsiballZ_copy.py'
Jan 20 18:59:59 compute-0 sudo[66813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 18:59:59 compute-0 python3.9[66815]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935598.8526566-422-80215271890943/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 18:59:59 compute-0 sudo[66813]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:00 compute-0 sudo[66965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsooadiguvwmhykvtpsqshlhhlqqgvlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935599.9296067-437-103043543290663/AnsiballZ_stat.py'
Jan 20 19:00:00 compute-0 sudo[66965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:00 compute-0 python3.9[66967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:00:00 compute-0 sudo[66965]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:00 compute-0 sudo[67088]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jubruetldnmndtjnkvmmztucpfukjymt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935599.9296067-437-103043543290663/AnsiballZ_copy.py'
Jan 20 19:00:00 compute-0 sudo[67088]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:00 compute-0 python3.9[67090]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768935599.9296067-437-103043543290663/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:00 compute-0 sudo[67088]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:01 compute-0 sudo[67240]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xhiikgacbscnpqnnztrsoriiqaeaupza ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935601.1064584-452-208501277434225/AnsiballZ_file.py'
Jan 20 19:00:01 compute-0 sudo[67240]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:01 compute-0 python3.9[67242]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:01 compute-0 sudo[67240]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:01 compute-0 sudo[67392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyjhrucjfiepzewzikxtmgmfyncvvpcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935601.7260234-460-34538631173208/AnsiballZ_command.py'
Jan 20 19:00:01 compute-0 sudo[67392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:02 compute-0 python3.9[67394]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:02 compute-0 sudo[67392]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:02 compute-0 sudo[67551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayojukbykkcowcxzoexbcyletxorxhcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935602.4032524-468-186632393719382/AnsiballZ_blockinfile.py'
Jan 20 19:00:02 compute-0 sudo[67551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:02 compute-0 python3.9[67553]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:03 compute-0 sudo[67551]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:03 compute-0 sudo[67704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caxgqnbnutoqsxztlvckrdegenuthjrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935603.1914682-477-48939279965829/AnsiballZ_file.py'
Jan 20 19:00:03 compute-0 sudo[67704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:03 compute-0 python3.9[67706]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:03 compute-0 sudo[67704]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:04 compute-0 sudo[67856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efisaijpjzzxqssiziyhzpljujnevmhr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935603.8119166-477-235729747499418/AnsiballZ_file.py'
Jan 20 19:00:04 compute-0 sudo[67856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:04 compute-0 python3.9[67858]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:04 compute-0 sudo[67856]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:04 compute-0 sudo[68008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvmhbpjqwqwpipoxynbacpbpejtmlfmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935604.4467175-492-277651672840017/AnsiballZ_mount.py'
Jan 20 19:00:04 compute-0 sudo[68008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:05 compute-0 python3.9[68010]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 19:00:05 compute-0 sudo[68008]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:05 compute-0 sudo[68161]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzjopojsmywhvrotyrejwbwnyoexzmdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935605.228238-492-275508236484847/AnsiballZ_mount.py'
Jan 20 19:00:05 compute-0 sudo[68161]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:05 compute-0 python3.9[68163]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 19:00:05 compute-0 sudo[68161]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:06 compute-0 sshd-session[58960]: Connection closed by 192.168.122.30 port 59172
Jan 20 19:00:06 compute-0 sshd-session[58957]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:00:06 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 20 19:00:06 compute-0 systemd[1]: session-14.scope: Consumed 35.121s CPU time.
Jan 20 19:00:06 compute-0 systemd-logind[797]: Session 14 logged out. Waiting for processes to exit.
Jan 20 19:00:06 compute-0 systemd-logind[797]: Removed session 14.
Jan 20 19:00:11 compute-0 sshd-session[68189]: Accepted publickey for zuul from 192.168.122.30 port 43708 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:00:11 compute-0 systemd-logind[797]: New session 15 of user zuul.
Jan 20 19:00:11 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 20 19:00:11 compute-0 sshd-session[68189]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:00:11 compute-0 sudo[68342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axyfggbexxiobwtwprpgjgqbublvagjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935611.1081877-16-230919830895177/AnsiballZ_tempfile.py'
Jan 20 19:00:11 compute-0 sudo[68342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:11 compute-0 python3.9[68344]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 20 19:00:11 compute-0 sudo[68342]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:12 compute-0 sudo[68494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ptlsjxhylkzkzcgypaoyhsgugazfzvui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935611.9152837-28-210918119420192/AnsiballZ_stat.py'
Jan 20 19:00:12 compute-0 sudo[68494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:12 compute-0 python3.9[68496]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:00:12 compute-0 sudo[68494]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:13 compute-0 sudo[68646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdoqnhaswezramqskxvcfoicezktfkcb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935612.6581185-38-25805417860302/AnsiballZ_setup.py'
Jan 20 19:00:13 compute-0 sudo[68646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:13 compute-0 python3.9[68648]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:00:13 compute-0 sudo[68646]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:14 compute-0 sudo[68798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lunebqvdtvqhfpyhrgoartcgqglzytzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935613.6904328-47-43054380466306/AnsiballZ_blockinfile.py'
Jan 20 19:00:14 compute-0 sudo[68798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:14 compute-0 python3.9[68800]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz3b07HV3uJtYZS5SXFV7UOV5We+VhL7E4MInSTY31YDxLu74UtLEKRyupRLnE9d5cVG8e5JHiBt72dhLY2VbhACUUzWUR1aTUO/jAfEzM97GQgzgl5skY63LeYydonq3csjRREkj9YaliQuWdLTocUhfB/0t0HX525BkLTzTfdhjhDOY6NzeJUhZjMKy9uM/RZvITLdPgnYTjcLN12hAtWjUGKvAcUEfWpRW0efbUgaPSuNuRxZWXNuusp0UBopS1fv5P4Ea0VhwUmNZ0IJC3eljfUuHXRdQr6A4px/e8yVSwUILaYNL6ettCVX8HNvIxk6xmT5clWgr+Vibu+qnmAoOdOqoRYdZgH/26kU5ZMOYv8wpa/TUoXbD1ClrmNUQNjD4kSFXQtI1uhLxuNYTzf4ftLLy92oo3ENBg4Oph0Hw00CUPNDcsAgD65KYg8/Frjms4h8AUjYrV2ktrqAPVEvcItbD5e7/cAcF1AnB9aHpNzgUo1iUbMmXN2/I/fQ0=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM5Jhg8QlHJt93+bopoKxGN+UwIsXQojyFhcp0nCuLCA
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCNoSkRzTUMXF81nHL5zY2fe7DfBkbvi2MFoFs1WurMuV9pkgr/kpqf2yHrz5D04ncV4FFj7hs+/ZPi7NjXPcIw=
                                             create=True mode=0644 path=/tmp/ansible.p6ziftnk state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:14 compute-0 sudo[68798]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:14 compute-0 sudo[68950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idtpaocxfdngvwvwrjwalzdoiqwlweyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935614.3673735-55-149382884448117/AnsiballZ_command.py'
Jan 20 19:00:14 compute-0 sudo[68950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:14 compute-0 python3.9[68952]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.p6ziftnk' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:14 compute-0 sudo[68950]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:15 compute-0 sudo[69104]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imsdeimzinhrcwdtprgthatcfandyrdb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935615.0712485-63-51517412258293/AnsiballZ_file.py'
Jan 20 19:00:15 compute-0 sudo[69104]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:15 compute-0 python3.9[69106]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.p6ziftnk state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:15 compute-0 sudo[69104]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:15 compute-0 sshd-session[68192]: Connection closed by 192.168.122.30 port 43708
Jan 20 19:00:15 compute-0 sshd-session[68189]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:00:16 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 20 19:00:16 compute-0 systemd[1]: session-15.scope: Consumed 3.139s CPU time.
Jan 20 19:00:16 compute-0 systemd-logind[797]: Session 15 logged out. Waiting for processes to exit.
Jan 20 19:00:16 compute-0 systemd-logind[797]: Removed session 15.
Jan 20 19:00:17 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 19:00:20 compute-0 sshd-session[69134]: Accepted publickey for zuul from 192.168.122.30 port 59252 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:00:20 compute-0 systemd-logind[797]: New session 16 of user zuul.
Jan 20 19:00:20 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 20 19:00:20 compute-0 sshd-session[69134]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:00:21 compute-0 python3.9[69287]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:00:22 compute-0 sudo[69441]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hujwmatnbkdwjehoretddjksdnmnvklu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935622.2996242-27-14177966854590/AnsiballZ_systemd.py'
Jan 20 19:00:22 compute-0 sudo[69441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:23 compute-0 python3.9[69443]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 19:00:23 compute-0 sudo[69441]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:23 compute-0 sudo[69595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axodstpgwnpfibeukxspxynmbjdmqxkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935623.336406-35-8142763889296/AnsiballZ_systemd.py'
Jan 20 19:00:23 compute-0 sudo[69595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:23 compute-0 python3.9[69597]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:00:23 compute-0 sudo[69595]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:24 compute-0 sudo[69748]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whlmfukbivowvoiixadvtakebmjlahdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935624.1565027-44-234411430423002/AnsiballZ_command.py'
Jan 20 19:00:24 compute-0 sudo[69748]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:24 compute-0 python3.9[69750]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:24 compute-0 sudo[69748]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:25 compute-0 sudo[69901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcnabzywsnxjbpidrkeehclppqnqqntr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935624.8754158-52-200575042485136/AnsiballZ_stat.py'
Jan 20 19:00:25 compute-0 sudo[69901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:25 compute-0 python3.9[69903]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:00:25 compute-0 sudo[69901]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:25 compute-0 sudo[70055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gauuojovqgxhjzhwrdczsxpqqhpqaxnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935625.6322227-60-207373412951129/AnsiballZ_command.py'
Jan 20 19:00:25 compute-0 sudo[70055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:26 compute-0 python3.9[70057]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:26 compute-0 sudo[70055]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:26 compute-0 sudo[70210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhcatcrhgunafcvajzzriksarxeafkfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935626.29451-68-46298632721860/AnsiballZ_file.py'
Jan 20 19:00:26 compute-0 sudo[70210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:26 compute-0 python3.9[70212]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:26 compute-0 sudo[70210]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:27 compute-0 sshd-session[69137]: Connection closed by 192.168.122.30 port 59252
Jan 20 19:00:27 compute-0 sshd-session[69134]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:00:27 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 20 19:00:27 compute-0 systemd[1]: session-16.scope: Consumed 4.289s CPU time.
Jan 20 19:00:27 compute-0 systemd-logind[797]: Session 16 logged out. Waiting for processes to exit.
Jan 20 19:00:27 compute-0 systemd-logind[797]: Removed session 16.
Jan 20 19:00:27 compute-0 sshd-session[70237]: Invalid user solana from 45.148.10.240 port 49810
Jan 20 19:00:28 compute-0 sshd-session[70237]: Connection closed by invalid user solana 45.148.10.240 port 49810 [preauth]
Jan 20 19:00:32 compute-0 sshd-session[70239]: Accepted publickey for zuul from 192.168.122.30 port 50456 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:00:32 compute-0 systemd-logind[797]: New session 17 of user zuul.
Jan 20 19:00:32 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 20 19:00:32 compute-0 sshd-session[70239]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:00:33 compute-0 python3.9[70392]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:00:33 compute-0 sudo[70546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvwzucqgwgomgcveahulxfrujtfpywxt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935633.5829463-29-127760404206136/AnsiballZ_setup.py'
Jan 20 19:00:33 compute-0 sudo[70546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:34 compute-0 python3.9[70548]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:00:34 compute-0 sudo[70546]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:34 compute-0 sudo[70630]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psuwzvnrndfpdfcyzwrxsscmrocxwnie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935633.5829463-29-127760404206136/AnsiballZ_dnf.py'
Jan 20 19:00:34 compute-0 sudo[70630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:34 compute-0 python3.9[70632]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 19:00:36 compute-0 sudo[70630]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:36 compute-0 python3.9[70783]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:38 compute-0 python3.9[70934]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 19:00:38 compute-0 python3.9[71084]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:00:38 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:00:38 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:00:39 compute-0 python3.9[71235]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:00:40 compute-0 sshd-session[70242]: Connection closed by 192.168.122.30 port 50456
Jan 20 19:00:40 compute-0 sshd-session[70239]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:00:40 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 20 19:00:40 compute-0 systemd[1]: session-17.scope: Consumed 5.914s CPU time.
Jan 20 19:00:40 compute-0 systemd-logind[797]: Session 17 logged out. Waiting for processes to exit.
Jan 20 19:00:40 compute-0 systemd-logind[797]: Removed session 17.
Jan 20 19:00:48 compute-0 sshd-session[71260]: Accepted publickey for zuul from 38.102.83.180 port 43946 ssh2: RSA SHA256:NUQhMT8WFYQNoBbXELd3vtykrkPErLT7OjFC/UP50jg
Jan 20 19:00:48 compute-0 systemd-logind[797]: New session 18 of user zuul.
Jan 20 19:00:48 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 20 19:00:48 compute-0 sshd-session[71260]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:00:49 compute-0 sudo[71336]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwsqalcoysrfhiebtznvpxbkoknksyew ; /usr/bin/python3'
Jan 20 19:00:49 compute-0 sudo[71336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:49 compute-0 useradd[71340]: new group: name=ceph-admin, GID=42478
Jan 20 19:00:49 compute-0 useradd[71340]: new user: name=ceph-admin, UID=42477, GID=42478, home=/home/ceph-admin, shell=/bin/bash, from=none
Jan 20 19:00:49 compute-0 sudo[71336]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:49 compute-0 sudo[71422]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-numzdwpihygjxkdhdggwfxqsmqwtwyeq ; /usr/bin/python3'
Jan 20 19:00:49 compute-0 sudo[71422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:49 compute-0 sudo[71422]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:50 compute-0 sudo[71495]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzoxuvueirxsicbnwaaeldvxlepffzpl ; /usr/bin/python3'
Jan 20 19:00:50 compute-0 sudo[71495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:50 compute-0 sudo[71495]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:50 compute-0 sudo[71545]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcemawpjoguktmrftukaiohozyekhunp ; /usr/bin/python3'
Jan 20 19:00:50 compute-0 sudo[71545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:50 compute-0 sudo[71545]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:51 compute-0 sudo[71571]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwiwtioultymtfugpprzsfrndfhrrsof ; /usr/bin/python3'
Jan 20 19:00:51 compute-0 sudo[71571]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:51 compute-0 sudo[71571]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:51 compute-0 sudo[71597]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktclxhukjktjwpxbgoovemwcldeqdwkc ; /usr/bin/python3'
Jan 20 19:00:51 compute-0 sudo[71597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:51 compute-0 sudo[71597]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:51 compute-0 sudo[71623]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vefrcrmynfsmgxypbcfdovwcbuxcvspr ; /usr/bin/python3'
Jan 20 19:00:51 compute-0 sudo[71623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:52 compute-0 sudo[71623]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:52 compute-0 sudo[71701]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfoibnkudncyiwqgoabnhyvkwgtjnijp ; /usr/bin/python3'
Jan 20 19:00:52 compute-0 sudo[71701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:52 compute-0 sudo[71701]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:52 compute-0 sudo[71774]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hizycyzugcdgpxqeyzzewfikqpfregaq ; /usr/bin/python3'
Jan 20 19:00:52 compute-0 sudo[71774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:52 compute-0 sudo[71774]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:53 compute-0 sudo[71876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iyhachrejzqiwojzwumxodlapgwubcse ; /usr/bin/python3'
Jan 20 19:00:53 compute-0 sudo[71876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:53 compute-0 sudo[71876]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:53 compute-0 sudo[71949]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brfshykjnlnyuotsfpjfocilxmyxwqkr ; /usr/bin/python3'
Jan 20 19:00:53 compute-0 sudo[71949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:53 compute-0 sudo[71949]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:54 compute-0 sudo[71999]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-npmypromlbhvfjzsqoskibxafbzubzmx ; /usr/bin/python3'
Jan 20 19:00:54 compute-0 sudo[71999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:54 compute-0 python3[72001]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:00:55 compute-0 sudo[71999]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:55 compute-0 sudo[72094]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krixpbzvjihqifyaclwfpswfbtqkyfev ; /usr/bin/python3'
Jan 20 19:00:55 compute-0 sudo[72094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:56 compute-0 python3[72096]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 19:00:57 compute-0 sudo[72094]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:57 compute-0 sudo[72121]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcbfenehyijsonnlybrpdpuwflcgoldx ; /usr/bin/python3'
Jan 20 19:00:57 compute-0 sudo[72121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:57 compute-0 python3[72123]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 19:00:57 compute-0 sudo[72121]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:57 compute-0 sudo[72147]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idvuvebvhyhgxlrenzwjgdhyrloeqnux ; /usr/bin/python3'
Jan 20 19:00:57 compute-0 sudo[72147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:57 compute-0 python3[72149]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G
                                          losetup /dev/loop3 /var/lib/ceph-osd-0.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:57 compute-0 kernel: loop: module loaded
Jan 20 19:00:57 compute-0 kernel: loop3: detected capacity change from 0 to 41943040
Jan 20 19:00:57 compute-0 sudo[72147]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:58 compute-0 sudo[72182]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ommpefttgpiifcjbhzrezynomgchmcqn ; /usr/bin/python3'
Jan 20 19:00:58 compute-0 sudo[72182]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:58 compute-0 python3[72184]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3
                                          vgcreate ceph_vg0 /dev/loop3
                                          lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:00:58 compute-0 lvm[72187]: PV /dev/loop3 not used.
Jan 20 19:00:58 compute-0 lvm[72196]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:00:58 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Jan 20 19:00:58 compute-0 lvm[72198]:   1 logical volume(s) in volume group "ceph_vg0" now active
Jan 20 19:00:58 compute-0 sudo[72182]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:58 compute-0 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Jan 20 19:00:58 compute-0 sudo[72274]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqlmhtqfopdhxokwjussgjpdaylpfrze ; /usr/bin/python3'
Jan 20 19:00:58 compute-0 sudo[72274]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:58 compute-0 python3[72276]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 19:00:58 compute-0 sudo[72274]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:59 compute-0 sudo[72347]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwspujkwfxajidyfbbvcpzoywmyebiek ; /usr/bin/python3'
Jan 20 19:00:59 compute-0 sudo[72347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:59 compute-0 python3[72349]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935658.701823-36189-20334441519892/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:00:59 compute-0 sudo[72347]: pam_unix(sudo:session): session closed for user root
Jan 20 19:00:59 compute-0 sudo[72397]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvtnqzxwxibtevyiiupucqwyuppzwctd ; /usr/bin/python3'
Jan 20 19:00:59 compute-0 sudo[72397]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:00:59 compute-0 python3[72399]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:01:00 compute-0 systemd[1]: Reloading.
Jan 20 19:01:00 compute-0 systemd-sysv-generator[72427]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:01:00 compute-0 systemd-rc-local-generator[72424]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:01:00 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 20 19:01:00 compute-0 bash[72440]: /dev/loop3: [64513]:4194935 (/var/lib/ceph-osd-0.img)
Jan 20 19:01:00 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 20 19:01:00 compute-0 lvm[72441]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:01:00 compute-0 lvm[72441]: VG ceph_vg0 finished
Jan 20 19:01:00 compute-0 sudo[72397]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:00 compute-0 sudo[72465]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojfjdqmcerqrhrzhtyrcmmawfwzvhprm ; /usr/bin/python3'
Jan 20 19:01:00 compute-0 sudo[72465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:00 compute-0 python3[72467]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 19:01:01 compute-0 CROND[72470]: (root) CMD (run-parts /etc/cron.hourly)
Jan 20 19:01:01 compute-0 run-parts[72473]: (/etc/cron.hourly) starting 0anacron
Jan 20 19:01:01 compute-0 anacron[72481]: Anacron started on 2026-01-20
Jan 20 19:01:01 compute-0 anacron[72481]: Will run job `cron.daily' in 37 min.
Jan 20 19:01:01 compute-0 anacron[72481]: Will run job `cron.weekly' in 57 min.
Jan 20 19:01:01 compute-0 anacron[72481]: Will run job `cron.monthly' in 77 min.
Jan 20 19:01:01 compute-0 anacron[72481]: Jobs will be executed sequentially
Jan 20 19:01:01 compute-0 run-parts[72483]: (/etc/cron.hourly) finished 0anacron
Jan 20 19:01:01 compute-0 CROND[72469]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 20 19:01:01 compute-0 sudo[72465]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:02 compute-0 sudo[72507]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjqxfxpmjouwopubwdhpvqagzlhsahel ; /usr/bin/python3'
Jan 20 19:01:02 compute-0 sudo[72507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:02 compute-0 python3[72509]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 19:01:02 compute-0 sudo[72507]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:02 compute-0 sudo[72533]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiuxqpfyuknunwakdwrakazrshjfoakj ; /usr/bin/python3'
Jan 20 19:01:02 compute-0 sudo[72533]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:02 compute-0 python3[72535]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G
                                          losetup /dev/loop4 /var/lib/ceph-osd-1.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:01:02 compute-0 kernel: loop4: detected capacity change from 0 to 41943040
Jan 20 19:01:02 compute-0 sudo[72533]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:02 compute-0 sudo[72565]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljihurywaapngwuaekysmpisflgozabc ; /usr/bin/python3'
Jan 20 19:01:02 compute-0 sudo[72565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:03 compute-0 python3[72567]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4
                                          vgcreate ceph_vg1 /dev/loop4
                                          lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:01:03 compute-0 lvm[72570]: PV /dev/loop4 not used.
Jan 20 19:01:03 compute-0 lvm[72572]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:01:03 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Jan 20 19:01:03 compute-0 lvm[72583]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:01:03 compute-0 lvm[72583]: VG ceph_vg1 finished
Jan 20 19:01:03 compute-0 lvm[72581]:   1 logical volume(s) in volume group "ceph_vg1" now active
Jan 20 19:01:03 compute-0 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Jan 20 19:01:03 compute-0 sudo[72565]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:03 compute-0 sudo[72659]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogozyqahhtetorqenuqjwldmpidujlwm ; /usr/bin/python3'
Jan 20 19:01:03 compute-0 sudo[72659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:03 compute-0 python3[72661]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 19:01:03 compute-0 sudo[72659]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:04 compute-0 sudo[72732]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcigvdcqxlzklixznxznzekxjpullpgt ; /usr/bin/python3'
Jan 20 19:01:04 compute-0 sudo[72732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:04 compute-0 python3[72734]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935663.487693-36231-256721410467085/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:01:04 compute-0 sudo[72732]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:04 compute-0 sudo[72782]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxyxbmdhnxbcfqjujtrrbbbektfuqnl ; /usr/bin/python3'
Jan 20 19:01:04 compute-0 sudo[72782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:04 compute-0 python3[72784]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:01:04 compute-0 systemd[1]: Reloading.
Jan 20 19:01:04 compute-0 systemd-sysv-generator[72819]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:01:04 compute-0 systemd-rc-local-generator[72811]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:01:05 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 20 19:01:05 compute-0 bash[72824]: /dev/loop4: [64513]:4328577 (/var/lib/ceph-osd-1.img)
Jan 20 19:01:05 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 20 19:01:05 compute-0 lvm[72825]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:01:05 compute-0 lvm[72825]: VG ceph_vg1 finished
Jan 20 19:01:05 compute-0 sudo[72782]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:05 compute-0 sudo[72849]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujwfmufakjppmuegsotyasupiqqxgrsc ; /usr/bin/python3'
Jan 20 19:01:05 compute-0 sudo[72849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:05 compute-0 python3[72851]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 19:01:06 compute-0 sudo[72849]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:06 compute-0 sudo[72876]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqwmorumpgbcmxqbvpnqwatboaqvppop ; /usr/bin/python3'
Jan 20 19:01:06 compute-0 sudo[72876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:07 compute-0 python3[72878]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 19:01:07 compute-0 sudo[72876]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:07 compute-0 sudo[72902]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkkdzhdutbhqjevvjeryfalxmixtquam ; /usr/bin/python3'
Jan 20 19:01:07 compute-0 sudo[72902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:07 compute-0 python3[72904]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G
                                          losetup /dev/loop5 /var/lib/ceph-osd-2.img
                                          lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:01:07 compute-0 kernel: loop5: detected capacity change from 0 to 41943040
Jan 20 19:01:07 compute-0 sudo[72902]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:07 compute-0 sudo[72934]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxdyxappidhgdvdhjwbjrijzrkjkbaay ; /usr/bin/python3'
Jan 20 19:01:07 compute-0 sudo[72934]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:07 compute-0 python3[72936]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5
                                          vgcreate ceph_vg2 /dev/loop5
                                          lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2
                                          lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:01:07 compute-0 lvm[72939]: PV /dev/loop5 not used.
Jan 20 19:01:08 compute-0 lvm[72949]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:01:08 compute-0 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Jan 20 19:01:08 compute-0 sudo[72934]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:08 compute-0 lvm[72951]:   1 logical volume(s) in volume group "ceph_vg2" now active
Jan 20 19:01:08 compute-0 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Jan 20 19:01:08 compute-0 sudo[73027]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ullntyiuywnwmbhiiprkzsmmxhceddmr ; /usr/bin/python3'
Jan 20 19:01:08 compute-0 sudo[73027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:08 compute-0 python3[73029]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 19:01:08 compute-0 sudo[73027]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:08 compute-0 sudo[73100]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plcknidgidkdjsyliyhpyiubhbpjywho ; /usr/bin/python3'
Jan 20 19:01:08 compute-0 sudo[73100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:08 compute-0 python3[73102]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935668.2830946-36258-245732525978810/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:01:09 compute-0 sudo[73100]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:09 compute-0 sudo[73150]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdpuumkdnbjtpgxrqwasyzbrhwmgqfrw ; /usr/bin/python3'
Jan 20 19:01:09 compute-0 sudo[73150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:09 compute-0 python3[73152]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:01:09 compute-0 systemd[1]: Reloading.
Jan 20 19:01:09 compute-0 systemd-rc-local-generator[73180]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:01:09 compute-0 systemd-sysv-generator[73185]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:01:09 compute-0 systemd[1]: Starting Ceph OSD losetup...
Jan 20 19:01:09 compute-0 bash[73192]: /dev/loop5: [64513]:4328578 (/var/lib/ceph-osd-2.img)
Jan 20 19:01:09 compute-0 systemd[1]: Finished Ceph OSD losetup.
Jan 20 19:01:09 compute-0 sudo[73150]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:09 compute-0 lvm[73193]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:01:09 compute-0 lvm[73193]: VG ceph_vg2 finished
Jan 20 19:01:11 compute-0 python3[73217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:01:13 compute-0 sudo[73308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtzoswhrqdkjihubfdpkkpbyqcllmqnk ; /usr/bin/python3'
Jan 20 19:01:13 compute-0 sudo[73308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:14 compute-0 python3[73310]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-tentacle'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 19:01:14 compute-0 chronyd[58476]: Selected source 167.160.187.179 (pool.ntp.org)
Jan 20 19:01:16 compute-0 sudo[73308]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:16 compute-0 sudo[73365]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqoqfkwgaxnngmojjfoabfmlkpenwahr ; /usr/bin/python3'
Jan 20 19:01:16 compute-0 sudo[73365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:16 compute-0 python3[73367]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 20 19:01:19 compute-0 groupadd[73377]: group added to /etc/group: name=cephadm, GID=993
Jan 20 19:01:19 compute-0 groupadd[73377]: group added to /etc/gshadow: name=cephadm
Jan 20 19:01:19 compute-0 groupadd[73377]: new group: name=cephadm, GID=993
Jan 20 19:01:19 compute-0 useradd[73384]: new user: name=cephadm, UID=992, GID=993, home=/var/lib/cephadm, shell=/bin/bash, from=none
Jan 20 19:01:20 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 19:01:20 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 19:01:20 compute-0 sudo[73365]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:20 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 19:01:20 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 19:01:20 compute-0 systemd[1]: run-ra74deb57212a4314bb94d8f7b5985e13.service: Deactivated successfully.
Jan 20 19:01:20 compute-0 sudo[73484]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nauzmzqvpuqurgvivdhmncihlnqpaupw ; /usr/bin/python3'
Jan 20 19:01:20 compute-0 sudo[73484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:20 compute-0 python3[73486]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 19:01:20 compute-0 sudo[73484]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:20 compute-0 sudo[73512]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nekhvbslediiypofhtcuawepqfolsgvk ; /usr/bin/python3'
Jan 20 19:01:20 compute-0 sudo[73512]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:21 compute-0 python3[73514]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:01:21 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:01:21 compute-0 sudo[73512]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:21 compute-0 sudo[73552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzzvvrvneqgeugnibenbqucvuequdrim ; /usr/bin/python3'
Jan 20 19:01:21 compute-0 sudo[73552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:21 compute-0 python3[73554]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:01:21 compute-0 sudo[73552]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:22 compute-0 sudo[73578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spwzivyjbixtxnndeyrjykamhchlhhur ; /usr/bin/python3'
Jan 20 19:01:22 compute-0 sudo[73578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:22 compute-0 python3[73580]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:01:22 compute-0 sudo[73578]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:23 compute-0 sudo[73656]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgdzwbwbutvltrbqsvvpkldirbvlhdxi ; /usr/bin/python3'
Jan 20 19:01:23 compute-0 sudo[73656]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:23 compute-0 python3[73658]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 19:01:23 compute-0 sudo[73656]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:23 compute-0 sudo[73729]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onffjpvwarlsxgljdyjvwzravlhphidh ; /usr/bin/python3'
Jan 20 19:01:23 compute-0 sudo[73729]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:23 compute-0 python3[73731]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935682.983403-36406-268475204916456/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:01:23 compute-0 sudo[73729]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:24 compute-0 sudo[73831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udjvcxkjrpyrliwkflinvuojulwijwoo ; /usr/bin/python3'
Jan 20 19:01:24 compute-0 sudo[73831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:24 compute-0 python3[73833]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 19:01:24 compute-0 sudo[73831]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:24 compute-0 sudo[73904]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znturwtlzbqigrqmfjajwqizllqeplog ; /usr/bin/python3'
Jan 20 19:01:24 compute-0 sudo[73904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:24 compute-0 python3[73906]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935684.1594503-36424-130976749764674/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:01:24 compute-0 sudo[73904]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:25 compute-0 sudo[73954]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjvywwrhpnglopagzwqktkjpqnsgaapq ; /usr/bin/python3'
Jan 20 19:01:25 compute-0 sudo[73954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:25 compute-0 python3[73956]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 19:01:25 compute-0 sudo[73954]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:25 compute-0 sudo[73982]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxihzfpafsryimemxsjnxvuvimrvfllb ; /usr/bin/python3'
Jan 20 19:01:25 compute-0 sudo[73982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:25 compute-0 python3[73984]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 19:01:25 compute-0 sudo[73982]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:25 compute-0 sudo[74010]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-taxunzchqvaqwqpwbbmhnpxruvmapxxg ; /usr/bin/python3'
Jan 20 19:01:25 compute-0 sudo[74010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:25 compute-0 python3[74012]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 19:01:25 compute-0 sudo[74010]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:26 compute-0 python3[74038]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 19:01:26 compute-0 sudo[74062]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovetytidjrieonbxapivwyxnqogugtuk ; /usr/bin/python3'
Jan 20 19:01:26 compute-0 sudo[74062]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:01:26 compute-0 python3[74064]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100
                                           _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:01:26 compute-0 sshd-session[74068]: Accepted publickey for ceph-admin from 192.168.122.100 port 36166 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:01:26 compute-0 systemd-logind[797]: New session 19 of user ceph-admin.
Jan 20 19:01:26 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 20 19:01:26 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 20 19:01:26 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 20 19:01:26 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 20 19:01:26 compute-0 systemd[74072]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:01:26 compute-0 systemd[74072]: Queued start job for default target Main User Target.
Jan 20 19:01:26 compute-0 systemd[74072]: Created slice User Application Slice.
Jan 20 19:01:26 compute-0 systemd[74072]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 19:01:26 compute-0 systemd[74072]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 19:01:26 compute-0 systemd[74072]: Reached target Paths.
Jan 20 19:01:26 compute-0 systemd[74072]: Reached target Timers.
Jan 20 19:01:26 compute-0 systemd[74072]: Starting D-Bus User Message Bus Socket...
Jan 20 19:01:26 compute-0 systemd[74072]: Starting Create User's Volatile Files and Directories...
Jan 20 19:01:26 compute-0 systemd[74072]: Listening on D-Bus User Message Bus Socket.
Jan 20 19:01:26 compute-0 systemd[74072]: Reached target Sockets.
Jan 20 19:01:26 compute-0 systemd[74072]: Finished Create User's Volatile Files and Directories.
Jan 20 19:01:26 compute-0 systemd[74072]: Reached target Basic System.
Jan 20 19:01:26 compute-0 systemd[74072]: Reached target Main User Target.
Jan 20 19:01:26 compute-0 systemd[74072]: Startup finished in 128ms.
Jan 20 19:01:26 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 20 19:01:26 compute-0 systemd[1]: Started Session 19 of User ceph-admin.
Jan 20 19:01:26 compute-0 sshd-session[74068]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:01:27 compute-0 sudo[74088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/echo
Jan 20 19:01:27 compute-0 sudo[74088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:01:27 compute-0 sudo[74088]: pam_unix(sudo:session): session closed for user root
Jan 20 19:01:27 compute-0 sshd-session[74087]: Received disconnect from 192.168.122.100 port 36166:11: disconnected by user
Jan 20 19:01:27 compute-0 sshd-session[74087]: Disconnected from user ceph-admin 192.168.122.100 port 36166
Jan 20 19:01:27 compute-0 sshd-session[74068]: pam_unix(sshd:session): session closed for user ceph-admin
Jan 20 19:01:27 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 20 19:01:27 compute-0 systemd-logind[797]: Session 19 logged out. Waiting for processes to exit.
Jan 20 19:01:27 compute-0 systemd-logind[797]: Removed session 19.
Jan 20 19:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:01:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:01:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat812151866-lower\x2dmapped.mount: Deactivated successfully.
Jan 20 19:01:37 compute-0 systemd[1]: Stopping User Manager for UID 42477...
Jan 20 19:01:37 compute-0 systemd[74072]: Activating special unit Exit the Session...
Jan 20 19:01:37 compute-0 systemd[74072]: Stopped target Main User Target.
Jan 20 19:01:37 compute-0 systemd[74072]: Stopped target Basic System.
Jan 20 19:01:37 compute-0 systemd[74072]: Stopped target Paths.
Jan 20 19:01:37 compute-0 systemd[74072]: Stopped target Sockets.
Jan 20 19:01:37 compute-0 systemd[74072]: Stopped target Timers.
Jan 20 19:01:37 compute-0 systemd[74072]: Stopped Mark boot as successful after the user session has run 2 minutes.
Jan 20 19:01:37 compute-0 systemd[74072]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 19:01:37 compute-0 systemd[74072]: Closed D-Bus User Message Bus Socket.
Jan 20 19:01:37 compute-0 systemd[74072]: Stopped Create User's Volatile Files and Directories.
Jan 20 19:01:37 compute-0 systemd[74072]: Removed slice User Application Slice.
Jan 20 19:01:37 compute-0 systemd[74072]: Reached target Shutdown.
Jan 20 19:01:37 compute-0 systemd[74072]: Finished Exit the Session.
Jan 20 19:01:37 compute-0 systemd[74072]: Reached target Exit the Session.
Jan 20 19:01:37 compute-0 systemd[1]: user@42477.service: Deactivated successfully.
Jan 20 19:01:37 compute-0 systemd[1]: Stopped User Manager for UID 42477.
Jan 20 19:01:37 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Jan 20 19:01:37 compute-0 systemd[1]: run-user-42477.mount: Deactivated successfully.
Jan 20 19:01:37 compute-0 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Jan 20 19:01:37 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Jan 20 19:01:37 compute-0 systemd[1]: Removed slice User Slice of UID 42477.
Jan 20 19:02:00 compute-0 podman[74166]: 2026-01-20 19:02:00.332834297 +0000 UTC m=+33.028037946 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:02:00 compute-0 podman[74232]: 2026-01-20 19:02:00.404016091 +0000 UTC m=+0.043177448 container create d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:00 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 20 19:02:00 compute-0 systemd[1]: Started libpod-conmon-d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a.scope.
Jan 20 19:02:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:00 compute-0 podman[74232]: 2026-01-20 19:02:00.381622078 +0000 UTC m=+0.020783465 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:00 compute-0 podman[74232]: 2026-01-20 19:02:00.503999664 +0000 UTC m=+0.143161051 container init d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:00 compute-0 podman[74232]: 2026-01-20 19:02:00.512020505 +0000 UTC m=+0.151181872 container start d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 19:02:00 compute-0 podman[74232]: 2026-01-20 19:02:00.521621203 +0000 UTC m=+0.160782590 container attach d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:00 compute-0 adoring_lichterman[74248]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 20 19:02:00 compute-0 systemd[1]: libpod-d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a.scope: Deactivated successfully.
Jan 20 19:02:00 compute-0 podman[74232]: 2026-01-20 19:02:00.610926041 +0000 UTC m=+0.250087408 container died d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-c863a7b0566fd79ba2055cf6570b60b23973452636fc0d389290f6aecd555258-merged.mount: Deactivated successfully.
Jan 20 19:02:00 compute-0 podman[74232]: 2026-01-20 19:02:00.692199637 +0000 UTC m=+0.331361004 container remove d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a (image=quay.io/ceph/ceph:v20, name=adoring_lichterman, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:00 compute-0 systemd[1]: libpod-conmon-d5333926e96a1d0200bc1ee2e5a99a8293173e48ffd2ea9980443860da96cb9a.scope: Deactivated successfully.
Jan 20 19:02:00 compute-0 podman[74268]: 2026-01-20 19:02:00.75823364 +0000 UTC m=+0.042537314 container create a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:00 compute-0 systemd[1]: Started libpod-conmon-a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53.scope.
Jan 20 19:02:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:00 compute-0 podman[74268]: 2026-01-20 19:02:00.814660664 +0000 UTC m=+0.098964338 container init a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:00 compute-0 podman[74268]: 2026-01-20 19:02:00.81994714 +0000 UTC m=+0.104250814 container start a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:02:00 compute-0 fervent_hofstadter[74284]: 167 167
Jan 20 19:02:00 compute-0 podman[74268]: 2026-01-20 19:02:00.82328459 +0000 UTC m=+0.107588294 container attach a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 19:02:00 compute-0 systemd[1]: libpod-a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53.scope: Deactivated successfully.
Jan 20 19:02:00 compute-0 podman[74268]: 2026-01-20 19:02:00.823891364 +0000 UTC m=+0.108195038 container died a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:02:00 compute-0 podman[74268]: 2026-01-20 19:02:00.740636451 +0000 UTC m=+0.024940145 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:00 compute-0 podman[74268]: 2026-01-20 19:02:00.859415951 +0000 UTC m=+0.143719625 container remove a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53 (image=quay.io/ceph/ceph:v20, name=fervent_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:00 compute-0 systemd[1]: libpod-conmon-a43e47380d27332845196592dff3f34ee893bdf3121124f987ef7bc0662d7d53.scope: Deactivated successfully.
Jan 20 19:02:00 compute-0 podman[74301]: 2026-01-20 19:02:00.913109789 +0000 UTC m=+0.036459189 container create 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:02:00 compute-0 systemd[1]: Started libpod-conmon-3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c.scope.
Jan 20 19:02:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:00 compute-0 podman[74301]: 2026-01-20 19:02:00.972636108 +0000 UTC m=+0.095985528 container init 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:02:00 compute-0 podman[74301]: 2026-01-20 19:02:00.976778285 +0000 UTC m=+0.100127675 container start 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:00 compute-0 podman[74301]: 2026-01-20 19:02:00.979746567 +0000 UTC m=+0.103095977 container attach 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:02:00 compute-0 podman[74301]: 2026-01-20 19:02:00.896495813 +0000 UTC m=+0.019845243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:00 compute-0 sweet_proskuriakova[74317]: AQAo0W9ppgNROxAAFxHumGgWP6WYQuYuigzcLw==
Jan 20 19:02:00 compute-0 systemd[1]: libpod-3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c.scope: Deactivated successfully.
Jan 20 19:02:00 compute-0 podman[74301]: 2026-01-20 19:02:00.9979091 +0000 UTC m=+0.121258520 container died 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:02:01 compute-0 podman[74301]: 2026-01-20 19:02:01.040177086 +0000 UTC m=+0.163526486 container remove 3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c (image=quay.io/ceph/ceph:v20, name=sweet_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True)
Jan 20 19:02:01 compute-0 systemd[1]: libpod-conmon-3a421ec44f9da8f54676fea9ab4efd49fd6986a9c291778d8db441790a0a242c.scope: Deactivated successfully.
Jan 20 19:02:01 compute-0 podman[74336]: 2026-01-20 19:02:01.094288365 +0000 UTC m=+0.037388751 container create 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:02:01 compute-0 systemd[1]: Started libpod-conmon-10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af.scope.
Jan 20 19:02:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:01 compute-0 podman[74336]: 2026-01-20 19:02:01.147642887 +0000 UTC m=+0.090743293 container init 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:01 compute-0 podman[74336]: 2026-01-20 19:02:01.152204895 +0000 UTC m=+0.095305281 container start 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:01 compute-0 podman[74336]: 2026-01-20 19:02:01.155471493 +0000 UTC m=+0.098571869 container attach 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 19:02:01 compute-0 sleepy_matsumoto[74352]: AQAp0W9pdA41ChAAUIj17MqnTKZ3KpckutYrKw==
Jan 20 19:02:01 compute-0 systemd[1]: libpod-10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af.scope: Deactivated successfully.
Jan 20 19:02:01 compute-0 podman[74336]: 2026-01-20 19:02:01.078247683 +0000 UTC m=+0.021348089 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:01 compute-0 podman[74336]: 2026-01-20 19:02:01.174782843 +0000 UTC m=+0.117883239 container died 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:01 compute-0 podman[74336]: 2026-01-20 19:02:01.213897085 +0000 UTC m=+0.156997471 container remove 10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af (image=quay.io/ceph/ceph:v20, name=sleepy_matsumoto, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:01 compute-0 systemd[1]: libpod-conmon-10b1875424d2e54e8d0c10aae1ce5028ec38f511511db16d2270f6dcbdd463af.scope: Deactivated successfully.
Jan 20 19:02:01 compute-0 podman[74372]: 2026-01-20 19:02:01.268835533 +0000 UTC m=+0.034837411 container create 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:02:01 compute-0 systemd[1]: Started libpod-conmon-184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c.scope.
Jan 20 19:02:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:01 compute-0 podman[74372]: 2026-01-20 19:02:01.2544064 +0000 UTC m=+0.020408298 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:01 compute-0 podman[74372]: 2026-01-20 19:02:01.731942405 +0000 UTC m=+0.497944303 container init 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 19:02:01 compute-0 podman[74372]: 2026-01-20 19:02:01.736756029 +0000 UTC m=+0.502757907 container start 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 19:02:01 compute-0 happy_wing[74389]: AQAp0W9paFQhLRAA4Qay79dbXeAWrHBkldsHBg==
Jan 20 19:02:01 compute-0 systemd[1]: libpod-184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c.scope: Deactivated successfully.
Jan 20 19:02:02 compute-0 podman[74372]: 2026-01-20 19:02:02.658821473 +0000 UTC m=+1.424823391 container attach 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:02 compute-0 podman[74372]: 2026-01-20 19:02:02.659430767 +0000 UTC m=+1.425432655 container died 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ce2845b4e62f0df10b99ee3c90aca1e9855972c38782fd1388d65621ffe9590a-merged.mount: Deactivated successfully.
Jan 20 19:02:02 compute-0 podman[74372]: 2026-01-20 19:02:02.702819322 +0000 UTC m=+1.468821200 container remove 184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c (image=quay.io/ceph/ceph:v20, name=happy_wing, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 19:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:02:02 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:02:02 compute-0 systemd[1]: libpod-conmon-184051a82c941bf8daa6d46f926c331a91dd5bf3e300bcfca2d1341e63b0b73c.scope: Deactivated successfully.
Jan 20 19:02:02 compute-0 podman[74412]: 2026-01-20 19:02:02.762639888 +0000 UTC m=+0.041639584 container create 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 19:02:02 compute-0 systemd[1]: Started libpod-conmon-05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31.scope.
Jan 20 19:02:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31ac80bd592b842cab8e57c94dd8e9212da275dd34a419e608cdcb2cf569d97f/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:02 compute-0 podman[74412]: 2026-01-20 19:02:02.830624248 +0000 UTC m=+0.109623974 container init 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 19:02:02 compute-0 podman[74412]: 2026-01-20 19:02:02.835399771 +0000 UTC m=+0.114399457 container start 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 19:02:02 compute-0 podman[74412]: 2026-01-20 19:02:02.741970625 +0000 UTC m=+0.020970341 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:02 compute-0 podman[74412]: 2026-01-20 19:02:02.83872738 +0000 UTC m=+0.117727076 container attach 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 19:02:02 compute-0 recursing_liskov[74430]: /usr/bin/monmaptool: monmap file /tmp/monmap
Jan 20 19:02:02 compute-0 recursing_liskov[74430]: setting min_mon_release = tentacle
Jan 20 19:02:02 compute-0 recursing_liskov[74430]: /usr/bin/monmaptool: set fsid to 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:02 compute-0 recursing_liskov[74430]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Jan 20 19:02:02 compute-0 systemd[1]: libpod-05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31.scope: Deactivated successfully.
Jan 20 19:02:02 compute-0 podman[74412]: 2026-01-20 19:02:02.868052698 +0000 UTC m=+0.147052394 container died 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:02:02 compute-0 podman[74412]: 2026-01-20 19:02:02.899825486 +0000 UTC m=+0.178825182 container remove 05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31 (image=quay.io/ceph/ceph:v20, name=recursing_liskov, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:02 compute-0 systemd[1]: libpod-conmon-05138b648179c9a27875bc7815dbffc4e3cb262d76193c89d75574809c8b3e31.scope: Deactivated successfully.
Jan 20 19:02:02 compute-0 podman[74448]: 2026-01-20 19:02:02.962448718 +0000 UTC m=+0.041076060 container create e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:02:03 compute-0 systemd[1]: Started libpod-conmon-e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef.scope.
Jan 20 19:02:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9abed47248996de709c45e42c24a0fca60440f96e3fbc1b99192244919e8260/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9abed47248996de709c45e42c24a0fca60440f96e3fbc1b99192244919e8260/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9abed47248996de709c45e42c24a0fca60440f96e3fbc1b99192244919e8260/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9abed47248996de709c45e42c24a0fca60440f96e3fbc1b99192244919e8260/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:03 compute-0 podman[74448]: 2026-01-20 19:02:03.027535278 +0000 UTC m=+0.106162610 container init e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:02:03 compute-0 podman[74448]: 2026-01-20 19:02:03.03223673 +0000 UTC m=+0.110864062 container start e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:03 compute-0 podman[74448]: 2026-01-20 19:02:03.035231122 +0000 UTC m=+0.113858454 container attach e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:02:03 compute-0 podman[74448]: 2026-01-20 19:02:02.942520592 +0000 UTC m=+0.021147944 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:03 compute-0 systemd[1]: libpod-e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef.scope: Deactivated successfully.
Jan 20 19:02:03 compute-0 podman[74448]: 2026-01-20 19:02:03.142496516 +0000 UTC m=+0.221123848 container died e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:03 compute-0 podman[74448]: 2026-01-20 19:02:03.180560763 +0000 UTC m=+0.259188105 container remove e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef (image=quay.io/ceph/ceph:v20, name=lucid_moser, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:02:03 compute-0 systemd[1]: libpod-conmon-e719a9b82e3f8e891d65c23a18f3e763bbb4f0ef634f8931114bfcd60753aaef.scope: Deactivated successfully.
Jan 20 19:02:03 compute-0 systemd[1]: Reloading.
Jan 20 19:02:03 compute-0 systemd-rc-local-generator[74532]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:02:03 compute-0 systemd-sysv-generator[74535]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:02:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:02:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-31ac80bd592b842cab8e57c94dd8e9212da275dd34a419e608cdcb2cf569d97f-merged.mount: Deactivated successfully.
Jan 20 19:02:03 compute-0 systemd[1]: Reloading.
Jan 20 19:02:03 compute-0 systemd-rc-local-generator[74569]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:02:03 compute-0 systemd-sysv-generator[74572]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:02:03 compute-0 systemd[1]: Reached target All Ceph clusters and services.
Jan 20 19:02:03 compute-0 systemd[1]: Reloading.
Jan 20 19:02:03 compute-0 systemd-rc-local-generator[74604]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:02:03 compute-0 systemd-sysv-generator[74611]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:02:03 compute-0 systemd[1]: Reached target Ceph cluster 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:02:04 compute-0 systemd[1]: Reloading.
Jan 20 19:02:04 compute-0 systemd-rc-local-generator[74646]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:02:04 compute-0 systemd-sysv-generator[74651]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:02:04 compute-0 systemd[1]: Reloading.
Jan 20 19:02:04 compute-0 systemd-rc-local-generator[74681]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:02:04 compute-0 systemd-sysv-generator[74687]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:02:04 compute-0 systemd[1]: Created slice Slice /system/ceph-90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:02:04 compute-0 systemd[1]: Reached target System Time Set.
Jan 20 19:02:04 compute-0 systemd[1]: Reached target System Time Synchronized.
Jan 20 19:02:04 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:02:04 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:02:04 compute-0 podman[74745]: 2026-01-20 19:02:04.76390148 +0000 UTC m=+0.044087530 container create 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7758296ded2ba9dfc7d6485a6598c3641ae7628376cf93ba34c54a9e40ee12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7758296ded2ba9dfc7d6485a6598c3641ae7628376cf93ba34c54a9e40ee12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7758296ded2ba9dfc7d6485a6598c3641ae7628376cf93ba34c54a9e40ee12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c7758296ded2ba9dfc7d6485a6598c3641ae7628376cf93ba34c54a9e40ee12/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:04 compute-0 podman[74745]: 2026-01-20 19:02:04.824756541 +0000 UTC m=+0.104942631 container init 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:02:04 compute-0 podman[74745]: 2026-01-20 19:02:04.831784168 +0000 UTC m=+0.111970218 container start 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 20 19:02:04 compute-0 bash[74745]: 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48
Jan 20 19:02:04 compute-0 podman[74745]: 2026-01-20 19:02:04.743399003 +0000 UTC m=+0.023585053 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:04 compute-0 systemd[1]: Started Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:02:04 compute-0 ceph-mon[74764]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 19:02:04 compute-0 ceph-mon[74764]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 20 19:02:04 compute-0 ceph-mon[74764]: pidfile_write: ignore empty --pid-file
Jan 20 19:02:04 compute-0 ceph-mon[74764]: load: jerasure load: lrc 
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: RocksDB version: 7.9.2
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Git sha 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: DB SUMMARY
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: DB Session ID:  LCRON4T8QIWEFDE4R6FR
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: CURRENT file:  CURRENT
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                         Options.error_if_exists: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                       Options.create_if_missing: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                                     Options.env: 0x55fb0fdfe440
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                                Options.info_log: 0x55fb1131b3e0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                              Options.statistics: (nil)
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                               Options.use_fsync: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                              Options.db_log_dir: 
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                                 Options.wal_dir: 
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                    Options.write_buffer_manager: 0x55fb1129a140
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                  Options.unordered_write: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                               Options.row_cache: None
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                              Options.wal_filter: None
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.two_write_queues: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.wal_compression: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.atomic_flush: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.max_background_jobs: 2
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.max_background_compactions: -1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.max_subcompactions: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.max_total_wal_size: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                          Options.max_open_files: -1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:       Options.compaction_readahead_size: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Compression algorithms supported:
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         kZSTD supported: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         kXpressCompression supported: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         kBZip2Compression supported: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         kLZ4Compression supported: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         kZlibCompression supported: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         kSnappyCompression supported: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:           Options.merge_operator: 
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:        Options.compaction_filter: None
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55fb112a6700)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55fb1128b8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:        Options.write_buffer_size: 33554432
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:  Options.max_write_buffer_number: 2
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:          Options.compression: NoCompression
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.num_levels: 7
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a47071cc-b77a-49b8-9d53-e31f11fbdebb
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935724885125, "job": 1, "event": "recovery_started", "wal_files": [4]}
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935724887419, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "LCRON4T8QIWEFDE4R6FR", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935724887518, "job": 1, "event": "recovery_finished"}
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55fb112b8e00
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: DB pointer 0x55fb11404000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:02:04 compute-0 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.14 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55fb1128b8d0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 19:02:04 compute-0 ceph-mon[74764]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@-1(???) e0 preinit fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(probing) e0 win_standalone_election
Jan 20 19:02:04 compute-0 ceph-mon[74764]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Jan 20 19:02:04 compute-0 podman[74765]: 2026-01-20 19:02:04.920549322 +0000 UTC m=+0.052257245 container create 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 20 19:02:04 compute-0 ceph-mon[74764]: paxos.0).electionLogic(2) init, last seen epoch 2
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [DBG] : fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [DBG] : last_changed 2026-01-20T19:02:02.864397+0000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [DBG] : created 2026-01-20T19:02:02.864397+0000
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,ceph_version_when_created=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v20,cpu=AMD EPYC-Rome Processor,created_at=2026-01-20T19:02:03.069482Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,os=Linux}
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout,17=tentacle ondisk layout}
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).mds e1 new map
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).mds e1 print_map
                                           e1
                                           btime 2026-01-20T19:02:04:930609+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mkfs 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 20 19:02:04 compute-0 ceph-mon[74764]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 20 19:02:04 compute-0 ceph-mon[74764]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 19:02:04 compute-0 systemd[1]: Started libpod-conmon-2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191.scope.
Jan 20 19:02:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d09f4975720db88c97c33b9a9fb79508bbafe3765418dd63bec7cab99db3b53/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d09f4975720db88c97c33b9a9fb79508bbafe3765418dd63bec7cab99db3b53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d09f4975720db88c97c33b9a9fb79508bbafe3765418dd63bec7cab99db3b53/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:04 compute-0 podman[74765]: 2026-01-20 19:02:04.899124902 +0000 UTC m=+0.030832845 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:05 compute-0 podman[74765]: 2026-01-20 19:02:05.004280027 +0000 UTC m=+0.135987950 container init 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:02:05 compute-0 podman[74765]: 2026-01-20 19:02:05.010970686 +0000 UTC m=+0.142678649 container start 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 19:02:05 compute-0 podman[74765]: 2026-01-20 19:02:05.015192297 +0000 UTC m=+0.146900240 container attach 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:02:05 compute-0 ceph-mon[74764]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 20 19:02:05 compute-0 ceph-mon[74764]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3950315037' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:   cluster:
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:     id:     90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:     health: HEALTH_OK
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:  
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:   services:
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:     mon: 1 daemons, quorum compute-0 (age 0.267702s) [leader: compute-0]
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:     mgr: no daemons active
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:     osd: 0 osds: 0 up, 0 in
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:  
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:   data:
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:     pools:   0 pools, 0 pgs
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:     objects: 0 objects, 0 B
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:     usage:   0 B used, 0 B / 0 B avail
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:     pgs:     
Jan 20 19:02:05 compute-0 pedantic_lumiere[74820]:  
Jan 20 19:02:05 compute-0 systemd[1]: libpod-2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191.scope: Deactivated successfully.
Jan 20 19:02:05 compute-0 podman[74765]: 2026-01-20 19:02:05.212872576 +0000 UTC m=+0.344580499 container died 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:05 compute-0 podman[74765]: 2026-01-20 19:02:05.252119221 +0000 UTC m=+0.383827144 container remove 2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191 (image=quay.io/ceph/ceph:v20, name=pedantic_lumiere, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:05 compute-0 systemd[1]: libpod-conmon-2199a56e9fc51b2201a423b6075c354fe5ea9b3e86e908182f506b705f370191.scope: Deactivated successfully.
Jan 20 19:02:05 compute-0 podman[74859]: 2026-01-20 19:02:05.318161054 +0000 UTC m=+0.044304666 container create 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:02:05 compute-0 systemd[1]: Started libpod-conmon-8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f.scope.
Jan 20 19:02:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c0c6af9c56424efa3e7d5bcc469a3bf7c0b6f83a47f126a1fe3794c84069d78/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c0c6af9c56424efa3e7d5bcc469a3bf7c0b6f83a47f126a1fe3794c84069d78/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c0c6af9c56424efa3e7d5bcc469a3bf7c0b6f83a47f126a1fe3794c84069d78/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2c0c6af9c56424efa3e7d5bcc469a3bf7c0b6f83a47f126a1fe3794c84069d78/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:05 compute-0 podman[74859]: 2026-01-20 19:02:05.298762961 +0000 UTC m=+0.024906593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:05 compute-0 podman[74859]: 2026-01-20 19:02:05.614878551 +0000 UTC m=+0.341022173 container init 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:02:05 compute-0 podman[74859]: 2026-01-20 19:02:05.620476375 +0000 UTC m=+0.346619987 container start 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 19:02:05 compute-0 podman[74859]: 2026-01-20 19:02:05.62527855 +0000 UTC m=+0.351422192 container attach 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:02:05 compute-0 ceph-mon[74764]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 20 19:02:05 compute-0 ceph-mon[74764]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4235203999' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 19:02:05 compute-0 ceph-mon[74764]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4235203999' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 19:02:05 compute-0 festive_cori[74875]: 
Jan 20 19:02:05 compute-0 festive_cori[74875]: [global]
Jan 20 19:02:05 compute-0 festive_cori[74875]:         fsid = 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:05 compute-0 festive_cori[74875]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 20 19:02:05 compute-0 festive_cori[74875]:         osd_crush_chooseleaf_type = 0
Jan 20 19:02:05 compute-0 systemd[1]: libpod-8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f.scope: Deactivated successfully.
Jan 20 19:02:05 compute-0 podman[74859]: 2026-01-20 19:02:05.830024817 +0000 UTC m=+0.556168439 container died 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-2c0c6af9c56424efa3e7d5bcc469a3bf7c0b6f83a47f126a1fe3794c84069d78-merged.mount: Deactivated successfully.
Jan 20 19:02:05 compute-0 podman[74859]: 2026-01-20 19:02:05.865893451 +0000 UTC m=+0.592037063 container remove 8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f (image=quay.io/ceph/ceph:v20, name=festive_cori, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 19:02:05 compute-0 systemd[1]: libpod-conmon-8dfef17c1fcd22138f2ea2c2d9ffc99ef5a2063b8b568148ee73046a80e9694f.scope: Deactivated successfully.
Jan 20 19:02:05 compute-0 podman[74913]: 2026-01-20 19:02:05.984099288 +0000 UTC m=+0.097802401 container create f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:02:05 compute-0 ceph-mon[74764]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 19:02:05 compute-0 ceph-mon[74764]: monmap epoch 1
Jan 20 19:02:05 compute-0 ceph-mon[74764]: fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:05 compute-0 ceph-mon[74764]: last_changed 2026-01-20T19:02:02.864397+0000
Jan 20 19:02:05 compute-0 ceph-mon[74764]: created 2026-01-20T19:02:02.864397+0000
Jan 20 19:02:05 compute-0 ceph-mon[74764]: min_mon_release 20 (tentacle)
Jan 20 19:02:05 compute-0 ceph-mon[74764]: election_strategy: 1
Jan 20 19:02:05 compute-0 ceph-mon[74764]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 19:02:05 compute-0 ceph-mon[74764]: fsmap 
Jan 20 19:02:05 compute-0 ceph-mon[74764]: osdmap e1: 0 total, 0 up, 0 in
Jan 20 19:02:05 compute-0 ceph-mon[74764]: mgrmap e1: no daemons active
Jan 20 19:02:05 compute-0 ceph-mon[74764]: from='client.? 192.168.122.100:0/3950315037' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 20 19:02:05 compute-0 ceph-mon[74764]: from='client.? 192.168.122.100:0/4235203999' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 19:02:05 compute-0 ceph-mon[74764]: from='client.? 192.168.122.100:0/4235203999' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 19:02:06 compute-0 podman[74913]: 2026-01-20 19:02:05.909107211 +0000 UTC m=+0.022810354 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:06 compute-0 systemd[1]: Started libpod-conmon-f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b.scope.
Jan 20 19:02:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd320281a1e7e23cbb6cd85a263a578a9a28543c2a5d735a20e8f38b35b4fdf/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd320281a1e7e23cbb6cd85a263a578a9a28543c2a5d735a20e8f38b35b4fdf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd320281a1e7e23cbb6cd85a263a578a9a28543c2a5d735a20e8f38b35b4fdf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbd320281a1e7e23cbb6cd85a263a578a9a28543c2a5d735a20e8f38b35b4fdf/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:06 compute-0 podman[74913]: 2026-01-20 19:02:06.061161723 +0000 UTC m=+0.174864836 container init f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 19:02:06 compute-0 podman[74913]: 2026-01-20 19:02:06.065979818 +0000 UTC m=+0.179682921 container start f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:06 compute-0 podman[74913]: 2026-01-20 19:02:06.070321381 +0000 UTC m=+0.184024504 container attach f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3)
Jan 20 19:02:06 compute-0 ceph-mon[74764]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:02:06 compute-0 ceph-mon[74764]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2868618155' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:02:06 compute-0 systemd[1]: libpod-f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b.scope: Deactivated successfully.
Jan 20 19:02:06 compute-0 podman[74913]: 2026-01-20 19:02:06.272887857 +0000 UTC m=+0.386590990 container died f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbd320281a1e7e23cbb6cd85a263a578a9a28543c2a5d735a20e8f38b35b4fdf-merged.mount: Deactivated successfully.
Jan 20 19:02:06 compute-0 podman[74913]: 2026-01-20 19:02:06.458927119 +0000 UTC m=+0.572630272 container remove f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b (image=quay.io/ceph/ceph:v20, name=heuristic_galileo, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:02:06 compute-0 systemd[1]: Stopping Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:02:06 compute-0 systemd[1]: libpod-conmon-f063e50630790b14d1f624d4fa833c8f595f599289dd7f43c573e80c33f9cd9b.scope: Deactivated successfully.
Jan 20 19:02:06 compute-0 ceph-mon[74764]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 20 19:02:06 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0[74760]: 2026-01-20T19:02:06.679+0000 7f379ffc6640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Jan 20 19:02:06 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0[74760]: 2026-01-20T19:02:06.679+0000 7f379ffc6640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 20 19:02:06 compute-0 ceph-mon[74764]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Jan 20 19:02:06 compute-0 ceph-mon[74764]: mon.compute-0@0(leader) e1 shutdown
Jan 20 19:02:06 compute-0 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 19:02:06 compute-0 ceph-mon[74764]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 19:02:06 compute-0 podman[74997]: 2026-01-20 19:02:06.700039552 +0000 UTC m=+0.056737242 container stop 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:06 compute-0 podman[74997]: 2026-01-20 19:02:06.726021451 +0000 UTC m=+0.082719151 container died 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 20 19:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-8c7758296ded2ba9dfc7d6485a6598c3641ae7628376cf93ba34c54a9e40ee12-merged.mount: Deactivated successfully.
Jan 20 19:02:06 compute-0 podman[74997]: 2026-01-20 19:02:06.784171136 +0000 UTC m=+0.140868816 container remove 97101f8c87b2303b90eec3234d4634bcb6df2765144527ed263fd31320ac0a48 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:02:06 compute-0 bash[74997]: ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0
Jan 20 19:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:02:06 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 20 19:02:06 compute-0 systemd[1]: ceph-90fff835-31df-513f-a409-b6642f04e6ac@mon.compute-0.service: Deactivated successfully.
Jan 20 19:02:06 compute-0 systemd[1]: Stopped Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:02:06 compute-0 systemd[1]: Starting Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:02:07 compute-0 podman[75100]: 2026-01-20 19:02:07.13608465 +0000 UTC m=+0.040239540 container create b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 19:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65098b3d119dd06f2b0ad003613b56aa6789cb414d37b21e84cc1174543b7115/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65098b3d119dd06f2b0ad003613b56aa6789cb414d37b21e84cc1174543b7115/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65098b3d119dd06f2b0ad003613b56aa6789cb414d37b21e84cc1174543b7115/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65098b3d119dd06f2b0ad003613b56aa6789cb414d37b21e84cc1174543b7115/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:07 compute-0 podman[75100]: 2026-01-20 19:02:07.194135402 +0000 UTC m=+0.098290302 container init b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030)
Jan 20 19:02:07 compute-0 podman[75100]: 2026-01-20 19:02:07.200985325 +0000 UTC m=+0.105140205 container start b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:07 compute-0 podman[75100]: 2026-01-20 19:02:07.119984626 +0000 UTC m=+0.024139536 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:07 compute-0 ceph-mon[75120]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 19:02:07 compute-0 ceph-mon[75120]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mon, pid 2
Jan 20 19:02:07 compute-0 ceph-mon[75120]: pidfile_write: ignore empty --pid-file
Jan 20 19:02:07 compute-0 ceph-mon[75120]: load: jerasure load: lrc 
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: RocksDB version: 7.9.2
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Git sha 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: DB SUMMARY
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: DB Session ID:  09M3MP4DL9LGPOBMD17J
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: CURRENT file:  CURRENT
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 60239 ; 
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                         Options.error_if_exists: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                       Options.create_if_missing: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                                     Options.env: 0x55eae18a0440
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                                      Options.fs: PosixFileSystem
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                                Options.info_log: 0x55eae3cbfe80
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                              Options.statistics: (nil)
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                               Options.use_fsync: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                              Options.db_log_dir: 
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                                 Options.wal_dir: 
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                    Options.write_buffer_manager: 0x55eae3d0a140
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                  Options.unordered_write: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                               Options.row_cache: None
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                              Options.wal_filter: None
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.two_write_queues: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.wal_compression: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.atomic_flush: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.max_background_jobs: 2
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.max_background_compactions: -1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.max_subcompactions: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.max_total_wal_size: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                          Options.max_open_files: -1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:       Options.compaction_readahead_size: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Compression algorithms supported:
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         kZSTD supported: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         kXpressCompression supported: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         kBZip2Compression supported: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         kLZ4Compression supported: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         kZlibCompression supported: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         kSnappyCompression supported: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:           Options.merge_operator: 
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:        Options.compaction_filter: None
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55eae3d16a00)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x55eae3cfb8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:        Options.write_buffer_size: 33554432
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:  Options.max_write_buffer_number: 2
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:          Options.compression: NoCompression
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.num_levels: 7
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: a47071cc-b77a-49b8-9d53-e31f11fbdebb
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935727243825, "job": 1, "event": "recovery_started", "wal_files": [9]}
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935727274709, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 59960, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 143, "table_properties": {"data_size": 58438, "index_size": 164, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3403, "raw_average_key_size": 30, "raw_value_size": 55790, "raw_average_value_size": 507, "num_data_blocks": 9, "num_entries": 110, "num_filter_entries": 110, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935727, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935727274856, "job": 1, "event": "recovery_finished"}
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Jan 20 19:02:07 compute-0 bash[75100]: b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681
Jan 20 19:02:07 compute-0 systemd[1]: Started Ceph mon.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55eae3d28e00
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: DB pointer 0x55eae3e72000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:02:07 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0   60.45 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Sum      2/0   60.45 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 1.33 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 1.33 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55eae3cfb8d0#2 capacity: 512.00 MB usage: 0.84 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 19:02:07 compute-0 ceph-mon[75120]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@-1(???) e1 preinit fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@-1(???).mds e1 new map
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@-1(???).mds e1 print_map
                                           e1
                                           btime 2026-01-20T19:02:04:930609+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: -1
                                            
                                           No filesystems configured
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@0(probing) e1 win_standalone_election
Jan 20 19:02:07 compute-0 ceph-mon[75120]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 19:02:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 19:02:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : monmap epoch 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : last_changed 2026-01-20T19:02:02.864397+0000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : created 2026-01-20T19:02:02.864397+0000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : min_mon_release 20 (tentacle)
Jan 20 19:02:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : election_strategy: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Jan 20 19:02:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsmap 
Jan 20 19:02:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Jan 20 19:02:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Jan 20 19:02:07 compute-0 podman[75144]: 2026-01-20 19:02:07.410465445 +0000 UTC m=+0.111250971 container create c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Jan 20 19:02:07 compute-0 ceph-mon[75120]: monmap epoch 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:07 compute-0 ceph-mon[75120]: last_changed 2026-01-20T19:02:02.864397+0000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: created 2026-01-20T19:02:02.864397+0000
Jan 20 19:02:07 compute-0 ceph-mon[75120]: min_mon_release 20 (tentacle)
Jan 20 19:02:07 compute-0 ceph-mon[75120]: election_strategy: 1
Jan 20 19:02:07 compute-0 ceph-mon[75120]: 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Jan 20 19:02:07 compute-0 ceph-mon[75120]: fsmap 
Jan 20 19:02:07 compute-0 ceph-mon[75120]: osdmap e1: 0 total, 0 up, 0 in
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mgrmap e1: no daemons active
Jan 20 19:02:07 compute-0 podman[75144]: 2026-01-20 19:02:07.321988527 +0000 UTC m=+0.022774063 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:07 compute-0 systemd[1]: Started libpod-conmon-c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709.scope.
Jan 20 19:02:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0c526bf7c87fd2807b59dfe99f8be27fe0dd811e7d594ed63adb43004a84fb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0c526bf7c87fd2807b59dfe99f8be27fe0dd811e7d594ed63adb43004a84fb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a0c526bf7c87fd2807b59dfe99f8be27fe0dd811e7d594ed63adb43004a84fb/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:07 compute-0 podman[75144]: 2026-01-20 19:02:07.494238741 +0000 UTC m=+0.195024277 container init c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 20 19:02:07 compute-0 podman[75144]: 2026-01-20 19:02:07.50803376 +0000 UTC m=+0.208819286 container start c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 19:02:07 compute-0 podman[75144]: 2026-01-20 19:02:07.512613358 +0000 UTC m=+0.213398904 container attach c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 20 19:02:07 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Jan 20 19:02:07 compute-0 systemd[1]: libpod-c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709.scope: Deactivated successfully.
Jan 20 19:02:07 compute-0 podman[75144]: 2026-01-20 19:02:07.725158062 +0000 UTC m=+0.425943578 container died c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 19:02:07 compute-0 podman[75144]: 2026-01-20 19:02:07.761533528 +0000 UTC m=+0.462319044 container remove c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709 (image=quay.io/ceph/ceph:v20, name=magical_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:02:07 compute-0 systemd[1]: libpod-conmon-c7bc25ea5ed53d83425b37538d1072c89254fefae5f704942a6f805e7fe70709.scope: Deactivated successfully.
Jan 20 19:02:07 compute-0 podman[75215]: 2026-01-20 19:02:07.852657899 +0000 UTC m=+0.057785987 container create 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 19:02:07 compute-0 systemd[1]: Started libpod-conmon-25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28.scope.
Jan 20 19:02:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef8e9da0542087ebd761d07079ae0621998993f749b462663d90391e1195861/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef8e9da0542087ebd761d07079ae0621998993f749b462663d90391e1195861/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ef8e9da0542087ebd761d07079ae0621998993f749b462663d90391e1195861/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:07 compute-0 podman[75215]: 2026-01-20 19:02:07.925104435 +0000 UTC m=+0.130232543 container init 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:07 compute-0 podman[75215]: 2026-01-20 19:02:07.831626288 +0000 UTC m=+0.036754376 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:07 compute-0 podman[75215]: 2026-01-20 19:02:07.930101534 +0000 UTC m=+0.135229632 container start 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:02:07 compute-0 podman[75215]: 2026-01-20 19:02:07.933785471 +0000 UTC m=+0.138913569 container attach 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:02:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Jan 20 19:02:08 compute-0 systemd[1]: libpod-25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28.scope: Deactivated successfully.
Jan 20 19:02:08 compute-0 podman[75215]: 2026-01-20 19:02:08.174326932 +0000 UTC m=+0.379455020 container died 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:02:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-8ef8e9da0542087ebd761d07079ae0621998993f749b462663d90391e1195861-merged.mount: Deactivated successfully.
Jan 20 19:02:08 compute-0 podman[75215]: 2026-01-20 19:02:08.318639509 +0000 UTC m=+0.523767627 container remove 25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28 (image=quay.io/ceph/ceph:v20, name=nifty_borg, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:08 compute-0 systemd[1]: libpod-conmon-25d086c795b4ddde0be68262af5c13bc21caa819edf59e0f760ab0a765400a28.scope: Deactivated successfully.
Jan 20 19:02:08 compute-0 systemd[1]: Reloading.
Jan 20 19:02:08 compute-0 systemd-sysv-generator[75298]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:02:08 compute-0 systemd-rc-local-generator[75293]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:02:08 compute-0 systemd[1]: Reloading.
Jan 20 19:02:08 compute-0 systemd-rc-local-generator[75340]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:02:08 compute-0 systemd-sysv-generator[75344]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:02:08 compute-0 systemd[1]: Starting Ceph mgr.compute-0.meyjbf for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:02:09 compute-0 podman[75398]: 2026-01-20 19:02:09.185872799 +0000 UTC m=+0.037870604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:09 compute-0 podman[75398]: 2026-01-20 19:02:09.473477029 +0000 UTC m=+0.325474794 container create 60642dffa907a68ef49dd0ef246239786fb490af2161d3f9f8a813106e21468e (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4c759c859e30a4aed3ad7d3db505e494141cb5e9ce5dc8d1e931b5889ce0f0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4c759c859e30a4aed3ad7d3db505e494141cb5e9ce5dc8d1e931b5889ce0f0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4c759c859e30a4aed3ad7d3db505e494141cb5e9ce5dc8d1e931b5889ce0f0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e4c759c859e30a4aed3ad7d3db505e494141cb5e9ce5dc8d1e931b5889ce0f0/merged/var/lib/ceph/mgr/ceph-compute-0.meyjbf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:09 compute-0 podman[75398]: 2026-01-20 19:02:09.559014517 +0000 UTC m=+0.411012382 container init 60642dffa907a68ef49dd0ef246239786fb490af2161d3f9f8a813106e21468e (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:09 compute-0 podman[75398]: 2026-01-20 19:02:09.565957622 +0000 UTC m=+0.417955427 container start 60642dffa907a68ef49dd0ef246239786fb490af2161d3f9f8a813106e21468e (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:09 compute-0 bash[75398]: 60642dffa907a68ef49dd0ef246239786fb490af2161d3f9f8a813106e21468e
Jan 20 19:02:09 compute-0 systemd[1]: Started Ceph mgr.compute-0.meyjbf for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:02:09 compute-0 ceph-mgr[75417]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 19:02:09 compute-0 ceph-mgr[75417]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 20 19:02:09 compute-0 ceph-mgr[75417]: pidfile_write: ignore empty --pid-file
Jan 20 19:02:09 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'alerts'
Jan 20 19:02:09 compute-0 podman[75420]: 2026-01-20 19:02:09.743583324 +0000 UTC m=+0.107396840 container create aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:09 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'balancer'
Jan 20 19:02:09 compute-0 podman[75420]: 2026-01-20 19:02:09.67375989 +0000 UTC m=+0.037573466 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:09 compute-0 systemd[1]: Started libpod-conmon-aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3.scope.
Jan 20 19:02:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4318c200c7d61d2e60ffb8ece0e87cd654e8407624419b9b39c862eb692e3ed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4318c200c7d61d2e60ffb8ece0e87cd654e8407624419b9b39c862eb692e3ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e4318c200c7d61d2e60ffb8ece0e87cd654e8407624419b9b39c862eb692e3ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:09 compute-0 podman[75420]: 2026-01-20 19:02:09.841579558 +0000 UTC m=+0.205393154 container init aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 19:02:09 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'cephadm'
Jan 20 19:02:09 compute-0 podman[75420]: 2026-01-20 19:02:09.855268455 +0000 UTC m=+0.219081971 container start aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:02:09 compute-0 podman[75420]: 2026-01-20 19:02:09.859117896 +0000 UTC m=+0.222931442 container attach aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 19:02:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 19:02:10 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3058033490' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 19:02:10 compute-0 modest_lalande[75456]: 
Jan 20 19:02:10 compute-0 modest_lalande[75456]: {
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "health": {
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "status": "HEALTH_OK",
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "checks": {},
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "mutes": []
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     },
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "election_epoch": 5,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "quorum": [
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         0
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     ],
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "quorum_names": [
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "compute-0"
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     ],
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "quorum_age": 2,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "monmap": {
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "epoch": 1,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "min_mon_release_name": "tentacle",
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "num_mons": 1
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     },
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "osdmap": {
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "epoch": 1,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "num_osds": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "num_up_osds": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "osd_up_since": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "num_in_osds": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "osd_in_since": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "num_remapped_pgs": 0
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     },
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "pgmap": {
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "pgs_by_state": [],
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "num_pgs": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "num_pools": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "num_objects": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "data_bytes": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "bytes_used": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "bytes_avail": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "bytes_total": 0
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     },
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "fsmap": {
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "epoch": 1,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "btime": "2026-01-20T19:02:04:930609+0000",
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "by_rank": [],
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "up:standby": 0
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     },
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "mgrmap": {
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "available": false,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "num_standbys": 0,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "modules": [
Jan 20 19:02:10 compute-0 modest_lalande[75456]:             "iostat",
Jan 20 19:02:10 compute-0 modest_lalande[75456]:             "nfs"
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         ],
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "services": {}
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     },
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "servicemap": {
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "epoch": 1,
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "modified": "2026-01-20T19:02:04.932596+0000",
Jan 20 19:02:10 compute-0 modest_lalande[75456]:         "services": {}
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     },
Jan 20 19:02:10 compute-0 modest_lalande[75456]:     "progress_events": {}
Jan 20 19:02:10 compute-0 modest_lalande[75456]: }
Jan 20 19:02:10 compute-0 systemd[1]: libpod-aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3.scope: Deactivated successfully.
Jan 20 19:02:10 compute-0 podman[75420]: 2026-01-20 19:02:10.089392181 +0000 UTC m=+0.453205707 container died aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e4318c200c7d61d2e60ffb8ece0e87cd654e8407624419b9b39c862eb692e3ed-merged.mount: Deactivated successfully.
Jan 20 19:02:10 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3058033490' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 19:02:10 compute-0 podman[75420]: 2026-01-20 19:02:10.131798141 +0000 UTC m=+0.495611657 container remove aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3 (image=quay.io/ceph/ceph:v20, name=modest_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:10 compute-0 systemd[1]: libpod-conmon-aa7d4ed1c397b043f6e56a84232d64a4adf5143e6911d475db14032e2ccb6db3.scope: Deactivated successfully.
Jan 20 19:02:10 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'crash'
Jan 20 19:02:10 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'dashboard'
Jan 20 19:02:11 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'devicehealth'
Jan 20 19:02:11 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 19:02:11 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 19:02:11 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 19:02:11 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]:   from numpy import show_config as show_numpy_config
Jan 20 19:02:11 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'influx'
Jan 20 19:02:11 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'insights'
Jan 20 19:02:11 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'iostat'
Jan 20 19:02:12 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'k8sevents'
Jan 20 19:02:12 compute-0 podman[75504]: 2026-01-20 19:02:12.179847189 +0000 UTC m=+0.020196783 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:12 compute-0 podman[75504]: 2026-01-20 19:02:12.316463653 +0000 UTC m=+0.156813217 container create 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:12 compute-0 systemd[1]: Started libpod-conmon-2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f.scope.
Jan 20 19:02:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774160578ab6233277f72f1066c8c0dd0c3da0dd2d9b0527df11f44ddb81be6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774160578ab6233277f72f1066c8c0dd0c3da0dd2d9b0527df11f44ddb81be6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/774160578ab6233277f72f1066c8c0dd0c3da0dd2d9b0527df11f44ddb81be6c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:12 compute-0 podman[75504]: 2026-01-20 19:02:12.412977732 +0000 UTC m=+0.253327326 container init 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:12 compute-0 podman[75504]: 2026-01-20 19:02:12.417956011 +0000 UTC m=+0.258305585 container start 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:02:12 compute-0 podman[75504]: 2026-01-20 19:02:12.42460718 +0000 UTC m=+0.264956774 container attach 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:12 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'localpool'
Jan 20 19:02:12 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 19:02:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 19:02:12 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/79746608' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]: 
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]: {
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "health": {
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "status": "HEALTH_OK",
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "checks": {},
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "mutes": []
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     },
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "election_epoch": 5,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "quorum": [
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         0
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     ],
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "quorum_names": [
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "compute-0"
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     ],
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "quorum_age": 5,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "monmap": {
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "epoch": 1,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "min_mon_release_name": "tentacle",
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "num_mons": 1
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     },
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "osdmap": {
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "epoch": 1,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "num_osds": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "num_up_osds": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "osd_up_since": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "num_in_osds": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "osd_in_since": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "num_remapped_pgs": 0
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     },
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "pgmap": {
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "pgs_by_state": [],
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "num_pgs": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "num_pools": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "num_objects": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "data_bytes": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "bytes_used": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "bytes_avail": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "bytes_total": 0
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     },
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "fsmap": {
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "epoch": 1,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "btime": "2026-01-20T19:02:04:930609+0000",
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "by_rank": [],
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "up:standby": 0
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     },
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "mgrmap": {
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "available": false,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "num_standbys": 0,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "modules": [
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:             "iostat",
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:             "nfs"
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         ],
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "services": {}
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     },
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "servicemap": {
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "epoch": 1,
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "modified": "2026-01-20T19:02:04.932596+0000",
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:         "services": {}
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     },
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]:     "progress_events": {}
Jan 20 19:02:12 compute-0 optimistic_hofstadter[75519]: }
Jan 20 19:02:12 compute-0 systemd[1]: libpod-2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f.scope: Deactivated successfully.
Jan 20 19:02:12 compute-0 podman[75504]: 2026-01-20 19:02:12.633411994 +0000 UTC m=+0.473761568 container died 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:02:12 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/79746608' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 19:02:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-774160578ab6233277f72f1066c8c0dd0c3da0dd2d9b0527df11f44ddb81be6c-merged.mount: Deactivated successfully.
Jan 20 19:02:12 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'mirroring'
Jan 20 19:02:12 compute-0 podman[75504]: 2026-01-20 19:02:12.777707841 +0000 UTC m=+0.618057435 container remove 2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f (image=quay.io/ceph/ceph:v20, name=optimistic_hofstadter, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:02:12 compute-0 systemd[1]: libpod-conmon-2e47570148a488fccd40fad7ae48dcea40df11279cd49b7f5255468332ba654f.scope: Deactivated successfully.
Jan 20 19:02:12 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'nfs'
Jan 20 19:02:13 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'orchestrator'
Jan 20 19:02:13 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 19:02:13 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'osd_support'
Jan 20 19:02:13 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 19:02:13 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'progress'
Jan 20 19:02:13 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'prometheus'
Jan 20 19:02:14 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'rbd_support'
Jan 20 19:02:14 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'rgw'
Jan 20 19:02:14 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'rook'
Jan 20 19:02:14 compute-0 podman[75560]: 2026-01-20 19:02:14.880690447 +0000 UTC m=+0.075335856 container create 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 19:02:14 compute-0 systemd[1]: Started libpod-conmon-01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36.scope.
Jan 20 19:02:14 compute-0 podman[75560]: 2026-01-20 19:02:14.82963082 +0000 UTC m=+0.024276249 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bcf4f3f666a21474e9f7befb2d3dfcbfc465a1669fc49e2accde339c599e1f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bcf4f3f666a21474e9f7befb2d3dfcbfc465a1669fc49e2accde339c599e1f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bcf4f3f666a21474e9f7befb2d3dfcbfc465a1669fc49e2accde339c599e1f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:14 compute-0 podman[75560]: 2026-01-20 19:02:14.968335204 +0000 UTC m=+0.162980633 container init 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 19:02:14 compute-0 podman[75560]: 2026-01-20 19:02:14.973127879 +0000 UTC m=+0.167773278 container start 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 19:02:14 compute-0 podman[75560]: 2026-01-20 19:02:14.976463358 +0000 UTC m=+0.171108777 container attach 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:15 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'selftest'
Jan 20 19:02:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 19:02:15 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3413646804' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]: 
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]: {
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "health": {
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "status": "HEALTH_OK",
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "checks": {},
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "mutes": []
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     },
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "election_epoch": 5,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "quorum": [
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         0
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     ],
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "quorum_names": [
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "compute-0"
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     ],
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "quorum_age": 7,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "monmap": {
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "epoch": 1,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "min_mon_release_name": "tentacle",
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "num_mons": 1
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     },
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "osdmap": {
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "epoch": 1,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "num_osds": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "num_up_osds": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "osd_up_since": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "num_in_osds": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "osd_in_since": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "num_remapped_pgs": 0
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     },
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "pgmap": {
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "pgs_by_state": [],
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "num_pgs": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "num_pools": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "num_objects": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "data_bytes": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "bytes_used": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "bytes_avail": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "bytes_total": 0
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     },
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "fsmap": {
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "epoch": 1,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "btime": "2026-01-20T19:02:04:930609+0000",
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "by_rank": [],
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "up:standby": 0
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     },
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "mgrmap": {
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "available": false,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "num_standbys": 0,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "modules": [
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:             "iostat",
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:             "nfs"
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         ],
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "services": {}
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     },
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "servicemap": {
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "epoch": 1,
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "modified": "2026-01-20T19:02:04.932596+0000",
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:         "services": {}
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     },
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]:     "progress_events": {}
Jan 20 19:02:15 compute-0 eloquent_mestorf[75577]: }
Jan 20 19:02:15 compute-0 systemd[1]: libpod-01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36.scope: Deactivated successfully.
Jan 20 19:02:15 compute-0 podman[75560]: 2026-01-20 19:02:15.178553583 +0000 UTC m=+0.373198972 container died 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 19:02:15 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'smb'
Jan 20 19:02:15 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3413646804' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 19:02:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-32bcf4f3f666a21474e9f7befb2d3dfcbfc465a1669fc49e2accde339c599e1f-merged.mount: Deactivated successfully.
Jan 20 19:02:15 compute-0 podman[75560]: 2026-01-20 19:02:15.284382303 +0000 UTC m=+0.479027702 container remove 01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36 (image=quay.io/ceph/ceph:v20, name=eloquent_mestorf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:02:15 compute-0 systemd[1]: libpod-conmon-01d99e61591ddc3d364ed947e4f917bff6b531e2158660aaacd4c5d7dfde0c36.scope: Deactivated successfully.
Jan 20 19:02:15 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'snap_schedule'
Jan 20 19:02:15 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'stats'
Jan 20 19:02:15 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'status'
Jan 20 19:02:15 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'telegraf'
Jan 20 19:02:15 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'telemetry'
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'volumes'
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: ms_deliver_dispatch: unhandled message 0x5595805a9860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.meyjbf
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr handle_mgr_map Activating!
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr handle_mgr_map I am now activating
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.meyjbf(active, starting, since 0.0115033s)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mds metadata"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e1 all = 1
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.meyjbf", "id": "compute-0.meyjbf"} v 0)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr metadata", "who": "compute-0.meyjbf", "id": "compute-0.meyjbf"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: balancer
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: crash
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [balancer INFO root] Starting
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Manager daemon compute-0.meyjbf is now available
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:02:16
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [balancer INFO root] No pools available
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: devicehealth
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [devicehealth INFO root] Starting
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: iostat
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: nfs
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: orchestrator
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: pg_autoscaler
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: progress
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [progress INFO root] Loading...
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [progress INFO root] No stored events to load
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [progress INFO root] Loaded [] historic events
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [progress INFO root] Loaded OSDMap, ready.
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [rbd_support INFO root] recovery thread starting
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [rbd_support INFO root] starting setup
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: rbd_support
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: status
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} v 0)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: telemetry
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [rbd_support INFO root] PerfHandler: starting
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TaskHandler: starting
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} v 0)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: [rbd_support INFO root] setup complete
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: Activating manager daemon compute-0.meyjbf
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mgrmap e2: compute-0.meyjbf(active, starting, since 0.0115033s)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mds metadata"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr metadata", "who": "compute-0.meyjbf", "id": "compute-0.meyjbf"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mon[75120]: Manager daemon compute-0.meyjbf is now available
Jan 20 19:02:16 compute-0 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} : dispatch
Jan 20 19:02:16 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: volumes
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Jan 20 19:02:16 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:17 compute-0 podman[75693]: 2026-01-20 19:02:17.376852318 +0000 UTC m=+0.071196536 container create fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:02:17 compute-0 systemd[1]: Started libpod-conmon-fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3.scope.
Jan 20 19:02:17 compute-0 podman[75693]: 2026-01-20 19:02:17.328815644 +0000 UTC m=+0.023159862 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a932ec66600f495820ae1fa0fd2ee84ee6bdc780823753448bd4a5659da38629/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a932ec66600f495820ae1fa0fd2ee84ee6bdc780823753448bd4a5659da38629/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a932ec66600f495820ae1fa0fd2ee84ee6bdc780823753448bd4a5659da38629/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:17 compute-0 podman[75693]: 2026-01-20 19:02:17.442874482 +0000 UTC m=+0.137218720 container init fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:02:17 compute-0 podman[75693]: 2026-01-20 19:02:17.447650765 +0000 UTC m=+0.141994973 container start fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:17 compute-0 podman[75693]: 2026-01-20 19:02:17.451730543 +0000 UTC m=+0.146074791 container attach fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:02:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 19:02:17 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/895259608' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 19:02:17 compute-0 youthful_williamson[75709]: 
Jan 20 19:02:17 compute-0 youthful_williamson[75709]: {
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "health": {
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "status": "HEALTH_OK",
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "checks": {},
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "mutes": []
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     },
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "election_epoch": 5,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "quorum": [
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         0
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     ],
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "quorum_names": [
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "compute-0"
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     ],
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "quorum_age": 10,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "monmap": {
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "epoch": 1,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "min_mon_release_name": "tentacle",
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "num_mons": 1
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     },
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "osdmap": {
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "epoch": 1,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "num_osds": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "num_up_osds": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "osd_up_since": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "num_in_osds": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "osd_in_since": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "num_remapped_pgs": 0
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     },
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "pgmap": {
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "pgs_by_state": [],
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "num_pgs": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "num_pools": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "num_objects": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "data_bytes": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "bytes_used": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "bytes_avail": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "bytes_total": 0
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     },
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "fsmap": {
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "epoch": 1,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "btime": "2026-01-20T19:02:04:930609+0000",
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "by_rank": [],
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "up:standby": 0
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     },
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "mgrmap": {
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "available": false,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "num_standbys": 0,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "modules": [
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:             "iostat",
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:             "nfs"
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         ],
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "services": {}
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     },
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "servicemap": {
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "epoch": 1,
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "modified": "2026-01-20T19:02:04.932596+0000",
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:         "services": {}
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     },
Jan 20 19:02:17 compute-0 youthful_williamson[75709]:     "progress_events": {}
Jan 20 19:02:17 compute-0 youthful_williamson[75709]: }
Jan 20 19:02:17 compute-0 systemd[1]: libpod-fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3.scope: Deactivated successfully.
Jan 20 19:02:17 compute-0 podman[75693]: 2026-01-20 19:02:17.930965948 +0000 UTC m=+0.625310166 container died fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 19:02:18 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.meyjbf(active, since 1.48832s)
Jan 20 19:02:18 compute-0 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:18 compute-0 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:18 compute-0 ceph-mon[75120]: from='mgr.14102 192.168.122.100:0/633790848' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:18 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/895259608' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 19:02:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-a932ec66600f495820ae1fa0fd2ee84ee6bdc780823753448bd4a5659da38629-merged.mount: Deactivated successfully.
Jan 20 19:02:18 compute-0 podman[75693]: 2026-01-20 19:02:18.140830418 +0000 UTC m=+0.835174636 container remove fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3 (image=quay.io/ceph/ceph:v20, name=youthful_williamson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:18 compute-0 systemd[1]: libpod-conmon-fa8a94ce3962a0027110f74c11f9d9ddb205ffb25862ef24f215cda47aa276d3.scope: Deactivated successfully.
Jan 20 19:02:18 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:18 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:19 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.meyjbf(active, since 2s)
Jan 20 19:02:19 compute-0 ceph-mon[75120]: mgrmap e3: compute-0.meyjbf(active, since 1.48832s)
Jan 20 19:02:20 compute-0 podman[75748]: 2026-01-20 19:02:20.187580546 +0000 UTC m=+0.023236609 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:20 compute-0 podman[75748]: 2026-01-20 19:02:20.416136704 +0000 UTC m=+0.251792757 container create 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:02:20 compute-0 ceph-mon[75120]: mgrmap e4: compute-0.meyjbf(active, since 2s)
Jan 20 19:02:20 compute-0 systemd[1]: Started libpod-conmon-570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1.scope.
Jan 20 19:02:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437ab07b79c271288dcd7579d25188231864983bf937bbe0a11edc5b8e4f5fec/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437ab07b79c271288dcd7579d25188231864983bf937bbe0a11edc5b8e4f5fec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/437ab07b79c271288dcd7579d25188231864983bf937bbe0a11edc5b8e4f5fec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:20 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:20 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:20 compute-0 podman[75748]: 2026-01-20 19:02:20.663559248 +0000 UTC m=+0.499215321 container init 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:02:20 compute-0 podman[75748]: 2026-01-20 19:02:20.669068293 +0000 UTC m=+0.504724346 container start 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:20 compute-0 podman[75748]: 2026-01-20 19:02:20.673003373 +0000 UTC m=+0.508659456 container attach 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Jan 20 19:02:21 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/24894714' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 19:02:21 compute-0 reverent_wiles[75765]: 
Jan 20 19:02:21 compute-0 reverent_wiles[75765]: {
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "health": {
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "status": "HEALTH_OK",
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "checks": {},
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "mutes": []
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     },
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "election_epoch": 5,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "quorum": [
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         0
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     ],
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "quorum_names": [
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "compute-0"
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     ],
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "quorum_age": 13,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "monmap": {
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "epoch": 1,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "min_mon_release_name": "tentacle",
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "num_mons": 1
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     },
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "osdmap": {
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "epoch": 1,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "num_osds": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "num_up_osds": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "osd_up_since": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "num_in_osds": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "osd_in_since": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "num_remapped_pgs": 0
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     },
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "pgmap": {
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "pgs_by_state": [],
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "num_pgs": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "num_pools": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "num_objects": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "data_bytes": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "bytes_used": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "bytes_avail": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "bytes_total": 0
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     },
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "fsmap": {
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "epoch": 1,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "btime": "2026-01-20T19:02:04:930609+0000",
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "by_rank": [],
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "up:standby": 0
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     },
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "mgrmap": {
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "available": true,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "num_standbys": 0,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "modules": [
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:             "iostat",
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:             "nfs"
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         ],
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "services": {}
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     },
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "servicemap": {
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "epoch": 1,
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "modified": "2026-01-20T19:02:04.932596+0000",
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:         "services": {}
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     },
Jan 20 19:02:21 compute-0 reverent_wiles[75765]:     "progress_events": {}
Jan 20 19:02:21 compute-0 reverent_wiles[75765]: }
Jan 20 19:02:21 compute-0 systemd[1]: libpod-570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1.scope: Deactivated successfully.
Jan 20 19:02:21 compute-0 podman[75748]: 2026-01-20 19:02:21.176456735 +0000 UTC m=+1.012112798 container died 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 19:02:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-437ab07b79c271288dcd7579d25188231864983bf937bbe0a11edc5b8e4f5fec-merged.mount: Deactivated successfully.
Jan 20 19:02:21 compute-0 podman[75748]: 2026-01-20 19:02:21.348077187 +0000 UTC m=+1.183733240 container remove 570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1 (image=quay.io/ceph/ceph:v20, name=reverent_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:21 compute-0 systemd[1]: libpod-conmon-570c52ee18d3b28b101cfb49b5cd81417b90b67c7d620e16be43c643921e69f1.scope: Deactivated successfully.
Jan 20 19:02:21 compute-0 podman[75805]: 2026-01-20 19:02:21.410452655 +0000 UTC m=+0.042885692 container create eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 19:02:21 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/24894714' entity='client.admin' cmd={"prefix": "status", "format": "json-pretty"} : dispatch
Jan 20 19:02:21 compute-0 systemd[1]: Started libpod-conmon-eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0.scope.
Jan 20 19:02:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd672c4a79242e9a9eaf3c7bb15e941f9c167cec467e5699c591244fd2754873/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd672c4a79242e9a9eaf3c7bb15e941f9c167cec467e5699c591244fd2754873/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd672c4a79242e9a9eaf3c7bb15e941f9c167cec467e5699c591244fd2754873/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd672c4a79242e9a9eaf3c7bb15e941f9c167cec467e5699c591244fd2754873/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:21 compute-0 podman[75805]: 2026-01-20 19:02:21.472153662 +0000 UTC m=+0.104586709 container init eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:21 compute-0 podman[75805]: 2026-01-20 19:02:21.476559124 +0000 UTC m=+0.108992151 container start eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:21 compute-0 podman[75805]: 2026-01-20 19:02:21.479598 +0000 UTC m=+0.112031027 container attach eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 19:02:21 compute-0 podman[75805]: 2026-01-20 19:02:21.389660146 +0000 UTC m=+0.022093233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 20 19:02:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/612880660' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 19:02:21 compute-0 elastic_carver[75821]: 
Jan 20 19:02:21 compute-0 elastic_carver[75821]: [global]
Jan 20 19:02:21 compute-0 elastic_carver[75821]:         fsid = 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:21 compute-0 elastic_carver[75821]:         mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Jan 20 19:02:21 compute-0 elastic_carver[75821]:         osd_crush_chooseleaf_type = 0
Jan 20 19:02:21 compute-0 systemd[1]: libpod-eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0.scope: Deactivated successfully.
Jan 20 19:02:21 compute-0 podman[75805]: 2026-01-20 19:02:21.999610439 +0000 UTC m=+0.632043466 container died eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 19:02:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-cd672c4a79242e9a9eaf3c7bb15e941f9c167cec467e5699c591244fd2754873-merged.mount: Deactivated successfully.
Jan 20 19:02:22 compute-0 podman[75805]: 2026-01-20 19:02:22.312844648 +0000 UTC m=+0.945277675 container remove eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0 (image=quay.io/ceph/ceph:v20, name=elastic_carver, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:02:22 compute-0 systemd[1]: libpod-conmon-eba1371682f3330a98845f48f0405dc88a3309bedce571c918fec7dee4a4e6c0.scope: Deactivated successfully.
Jan 20 19:02:22 compute-0 podman[75860]: 2026-01-20 19:02:22.383113736 +0000 UTC m=+0.048896852 container create b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:22 compute-0 systemd[1]: Started libpod-conmon-b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef.scope.
Jan 20 19:02:22 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/612880660' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 19:02:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef0a0d116e94604d16f9178c04a0381e210c559ac7fdf869ba022e99dea4ba8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef0a0d116e94604d16f9178c04a0381e210c559ac7fdf869ba022e99dea4ba8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef0a0d116e94604d16f9178c04a0381e210c559ac7fdf869ba022e99dea4ba8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:22 compute-0 podman[75860]: 2026-01-20 19:02:22.445212981 +0000 UTC m=+0.110996117 container init b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:22 compute-0 podman[75860]: 2026-01-20 19:02:22.450523006 +0000 UTC m=+0.116306122 container start b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 19:02:22 compute-0 podman[75860]: 2026-01-20 19:02:22.361072276 +0000 UTC m=+0.026855442 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:22 compute-0 podman[75860]: 2026-01-20 19:02:22.45412615 +0000 UTC m=+0.119909286 container attach b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:22 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:22 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Jan 20 19:02:22 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1972552445' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 20 19:02:23 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1972552445' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "cephadm"} : dispatch
Jan 20 19:02:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1972552445' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr handle_mgr_map respawning because set of enabled modules changed!
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  e: '/usr/bin/ceph-mgr'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  0: '/usr/bin/ceph-mgr'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  1: '-n'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  2: 'mgr.compute-0.meyjbf'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  3: '-f'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  4: '--setuser'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  5: 'ceph'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  6: '--setgroup'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  7: 'ceph'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  8: '--default-log-to-file=false'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  9: '--default-log-to-journald=true'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  10: '--default-log-to-stderr=false'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr respawn  exe_path /proc/self/exe
Jan 20 19:02:23 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.meyjbf(active, since 6s)
Jan 20 19:02:23 compute-0 systemd[1]: libpod-b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef.scope: Deactivated successfully.
Jan 20 19:02:23 compute-0 podman[75860]: 2026-01-20 19:02:23.57885192 +0000 UTC m=+1.244635046 container died b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:02:23 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: ignoring --setuser ceph since I am not root
Jan 20 19:02:23 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: ignoring --setgroup ceph since I am not root
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: pidfile_write: ignore empty --pid-file
Jan 20 19:02:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ef0a0d116e94604d16f9178c04a0381e210c559ac7fdf869ba022e99dea4ba8-merged.mount: Deactivated successfully.
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'alerts'
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'balancer'
Jan 20 19:02:23 compute-0 podman[75860]: 2026-01-20 19:02:23.797706021 +0000 UTC m=+1.463489137 container remove b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef (image=quay.io/ceph/ceph:v20, name=sad_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3)
Jan 20 19:02:23 compute-0 systemd[1]: libpod-conmon-b900e28bea133f3a8fad25df6923bc3add3c93d10648d10bb8d3109a5a30f6ef.scope: Deactivated successfully.
Jan 20 19:02:23 compute-0 podman[75935]: 2026-01-20 19:02:23.863726946 +0000 UTC m=+0.043324094 container create 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:23 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'cephadm'
Jan 20 19:02:24 compute-0 podman[75935]: 2026-01-20 19:02:23.845683319 +0000 UTC m=+0.025280497 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:25 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1972552445' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Jan 20 19:02:25 compute-0 ceph-mon[75120]: mgrmap e5: compute-0.meyjbf(active, since 6s)
Jan 20 19:02:25 compute-0 systemd[1]: Started libpod-conmon-396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129.scope.
Jan 20 19:02:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b138406dbd280c5800200e171fdbe747b01bdc470d1d2e204575e6444bca73a0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b138406dbd280c5800200e171fdbe747b01bdc470d1d2e204575e6444bca73a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b138406dbd280c5800200e171fdbe747b01bdc470d1d2e204575e6444bca73a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:25 compute-0 podman[75935]: 2026-01-20 19:02:25.171077566 +0000 UTC m=+1.350674734 container init 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:02:25 compute-0 podman[75935]: 2026-01-20 19:02:25.176394912 +0000 UTC m=+1.355992060 container start 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:25 compute-0 podman[75935]: 2026-01-20 19:02:25.179635387 +0000 UTC m=+1.359232565 container attach 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:02:25 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'crash'
Jan 20 19:02:25 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'dashboard'
Jan 20 19:02:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 20 19:02:25 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3314750318' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 20 19:02:25 compute-0 relaxed_robinson[75962]: {
Jan 20 19:02:25 compute-0 relaxed_robinson[75962]:     "epoch": 5,
Jan 20 19:02:25 compute-0 relaxed_robinson[75962]:     "available": true,
Jan 20 19:02:25 compute-0 relaxed_robinson[75962]:     "active_name": "compute-0.meyjbf",
Jan 20 19:02:25 compute-0 relaxed_robinson[75962]:     "num_standby": 0
Jan 20 19:02:25 compute-0 relaxed_robinson[75962]: }
Jan 20 19:02:25 compute-0 systemd[1]: libpod-396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129.scope: Deactivated successfully.
Jan 20 19:02:25 compute-0 podman[75988]: 2026-01-20 19:02:25.695625864 +0000 UTC m=+0.024605254 container died 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 20 19:02:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-b138406dbd280c5800200e171fdbe747b01bdc470d1d2e204575e6444bca73a0-merged.mount: Deactivated successfully.
Jan 20 19:02:25 compute-0 podman[75988]: 2026-01-20 19:02:25.831920886 +0000 UTC m=+0.160900266 container remove 396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129 (image=quay.io/ceph/ceph:v20, name=relaxed_robinson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:25 compute-0 systemd[1]: libpod-conmon-396d9bfdf260c835220e9b29059cca93e6e261c8740b49598ff7fc712b4ae129.scope: Deactivated successfully.
Jan 20 19:02:25 compute-0 podman[76003]: 2026-01-20 19:02:25.87818483 +0000 UTC m=+0.023323021 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:26 compute-0 podman[76003]: 2026-01-20 19:02:26.179478835 +0000 UTC m=+0.324617026 container create 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:26 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3314750318' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 20 19:02:26 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'devicehealth'
Jan 20 19:02:26 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 19:02:26 compute-0 systemd[1]: Started libpod-conmon-8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa.scope.
Jan 20 19:02:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a528da0c7303622b842d8db3452cb10c05278aca2acce2a50015d73e4c2ad1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a528da0c7303622b842d8db3452cb10c05278aca2acce2a50015d73e4c2ad1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a528da0c7303622b842d8db3452cb10c05278aca2acce2a50015d73e4c2ad1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:26 compute-0 podman[76003]: 2026-01-20 19:02:26.314471885 +0000 UTC m=+0.459610086 container init 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:26 compute-0 podman[76003]: 2026-01-20 19:02:26.319381111 +0000 UTC m=+0.464519282 container start 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:26 compute-0 podman[76003]: 2026-01-20 19:02:26.32498443 +0000 UTC m=+0.470122641 container attach 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:26 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 19:02:26 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 19:02:26 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]:   from numpy import show_config as show_numpy_config
Jan 20 19:02:26 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'influx'
Jan 20 19:02:26 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'insights'
Jan 20 19:02:26 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'iostat'
Jan 20 19:02:26 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'k8sevents'
Jan 20 19:02:27 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'localpool'
Jan 20 19:02:27 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 19:02:27 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'mirroring'
Jan 20 19:02:27 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'nfs'
Jan 20 19:02:27 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'orchestrator'
Jan 20 19:02:27 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 19:02:28 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'osd_support'
Jan 20 19:02:28 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 19:02:28 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'progress'
Jan 20 19:02:28 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'prometheus'
Jan 20 19:02:28 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'rbd_support'
Jan 20 19:02:28 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'rgw'
Jan 20 19:02:28 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'rook'
Jan 20 19:02:29 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'selftest'
Jan 20 19:02:29 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'smb'
Jan 20 19:02:29 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'snap_schedule'
Jan 20 19:02:30 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'stats'
Jan 20 19:02:30 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'status'
Jan 20 19:02:30 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'telegraf'
Jan 20 19:02:30 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'telemetry'
Jan 20 19:02:30 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 19:02:30 compute-0 ceph-mgr[75417]: mgr[py] Loading python module 'volumes'
Jan 20 19:02:30 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Active manager daemon compute-0.meyjbf restarted
Jan 20 19:02:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Jan 20 19:02:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:02:30 compute-0 ceph-mgr[75417]: ms_deliver_dispatch: unhandled message 0x558c53b6a000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 19:02:30 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.meyjbf
Jan 20 19:02:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.2 inc ratio 0.4 full ratio 0.4
Jan 20 19:02:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Jan 20 19:02:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Jan 20 19:02:31 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Jan 20 19:02:31 compute-0 ceph-mgr[75417]: mgr handle_mgr_map Activating!
Jan 20 19:02:31 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.meyjbf(active, starting, since 0.526527s)
Jan 20 19:02:31 compute-0 ceph-mgr[75417]: mgr handle_mgr_map I am now activating
Jan 20 19:02:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 19:02:31 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 20 19:02:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.meyjbf", "id": "compute-0.meyjbf"} v 0)
Jan 20 19:02:31 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr metadata", "who": "compute-0.meyjbf", "id": "compute-0.meyjbf"} : dispatch
Jan 20 19:02:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Jan 20 19:02:31 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mds metadata"} : dispatch
Jan 20 19:02:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e1 all = 1
Jan 20 19:02:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 20 19:02:31 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata"} : dispatch
Jan 20 19:02:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Jan 20 19:02:31 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata"} : dispatch
Jan 20 19:02:31 compute-0 ceph-mgr[75417]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:31 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: balancer
Jan 20 19:02:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Starting
Jan 20 19:02:31 compute-0 ceph-mgr[75417]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:02:31
Jan 20 19:02:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:02:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:02:31 compute-0 ceph-mgr[75417]: [balancer INFO root] No pools available
Jan 20 19:02:31 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Manager daemon compute-0.meyjbf is now available
Jan 20 19:02:31 compute-0 ceph-mon[75120]: Active manager daemon compute-0.meyjbf restarted
Jan 20 19:02:31 compute-0 ceph-mon[75120]: Activating manager daemon compute-0.meyjbf
Jan 20 19:02:31 compute-0 ceph-mon[75120]: osdmap e2: 0 total, 0 up, 0 in
Jan 20 19:02:31 compute-0 ceph-mon[75120]: mgrmap e6: compute-0.meyjbf(active, starting, since 0.526527s)
Jan 20 19:02:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 20 19:02:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr metadata", "who": "compute-0.meyjbf", "id": "compute-0.meyjbf"} : dispatch
Jan 20 19:02:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mds metadata"} : dispatch
Jan 20 19:02:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata"} : dispatch
Jan 20 19:02:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata"} : dispatch
Jan 20 19:02:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019908960 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:02:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.cephadm_root_ca_cert}] v 0)
Jan 20 19:02:33 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:34 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.meyjbf(active, since 3s)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14128 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14128 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Jan 20 19:02:34 compute-0 interesting_engelbart[76019]: {
Jan 20 19:02:34 compute-0 interesting_engelbart[76019]:     "mgrmap_epoch": 7,
Jan 20 19:02:34 compute-0 interesting_engelbart[76019]:     "initialized": true
Jan 20 19:02:34 compute-0 interesting_engelbart[76019]: }
Jan 20 19:02:34 compute-0 systemd[1]: libpod-8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa.scope: Deactivated successfully.
Jan 20 19:02:34 compute-0 podman[76003]: 2026-01-20 19:02:34.042710178 +0000 UTC m=+8.187848409 container died 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:02:34 compute-0 ceph-mon[75120]: Manager daemon compute-0.meyjbf is now available
Jan 20 19:02:34 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.cephadm_root_ca_key}] v 0)
Jan 20 19:02:34 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Jan 20 19:02:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Jan 20 19:02:34 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-67a528da0c7303622b842d8db3452cb10c05278aca2acce2a50015d73e4c2ad1-merged.mount: Deactivated successfully.
Jan 20 19:02:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Jan 20 19:02:34 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: cephadm
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: crash
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: devicehealth
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [devicehealth INFO root] Starting
Jan 20 19:02:34 compute-0 podman[76003]: 2026-01-20 19:02:34.431997763 +0000 UTC m=+8.577135934 container remove 8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa (image=quay.io/ceph/ceph:v20, name=interesting_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: iostat
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: nfs
Jan 20 19:02:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 19:02:34 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: orchestrator
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: pg_autoscaler
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: progress
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [progress INFO root] Loading...
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [progress INFO root] No stored events to load
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [progress INFO root] Loaded [] historic events
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [progress INFO root] Loaded OSDMap, ready.
Jan 20 19:02:34 compute-0 systemd[1]: libpod-conmon-8b2fc4716ede29e14f361c84fc66d1b90bfad13bee4a73fdaf329526938fb1aa.scope: Deactivated successfully.
Jan 20 19:02:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 19:02:34 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] recovery thread starting
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] starting setup
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: rbd_support
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: status
Jan 20 19:02:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} v 0)
Jan 20 19:02:34 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} : dispatch
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: telemetry
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] PerfHandler: starting
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TaskHandler: starting
Jan 20 19:02:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} v 0)
Jan 20 19:02:34 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} : dispatch
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] setup complete
Jan 20 19:02:34 compute-0 ceph-mgr[75417]: mgr load Constructed class from module: volumes
Jan 20 19:02:34 compute-0 podman[76119]: 2026-01-20 19:02:34.490690715 +0000 UTC m=+0.040097418 container create 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:34 compute-0 systemd[1]: Started libpod-conmon-3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533.scope.
Jan 20 19:02:34 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/003079fc76c17ba5b9dae0c5c4fdd44c0a1af6e391fb2788f970b7cfd9399a49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/003079fc76c17ba5b9dae0c5c4fdd44c0a1af6e391fb2788f970b7cfd9399a49/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/003079fc76c17ba5b9dae0c5c4fdd44c0a1af6e391fb2788f970b7cfd9399a49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:34 compute-0 podman[76119]: 2026-01-20 19:02:34.473501299 +0000 UTC m=+0.022908032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:34 compute-0 podman[76119]: 2026-01-20 19:02:34.572604803 +0000 UTC m=+0.122011516 container init 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:34 compute-0 podman[76119]: 2026-01-20 19:02:34.58023255 +0000 UTC m=+0.129639253 container start 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:34 compute-0 podman[76119]: 2026-01-20 19:02:34.583411183 +0000 UTC m=+0.132817886 container attach 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "orchestrator"} v 0)
Jan 20 19:02:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/817075272' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 20 19:02:35 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:35 compute-0 ceph-mgr[75417]: [cephadm INFO cherrypy.error] [20/Jan/2026:19:02:35] ENGINE Bus STARTING
Jan 20 19:02:35 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : [20/Jan/2026:19:02:35] ENGINE Bus STARTING
Jan 20 19:02:35 compute-0 ceph-mgr[75417]: [cephadm INFO cherrypy.error] [20/Jan/2026:19:02:35] ENGINE Serving on http://192.168.122.100:8765
Jan 20 19:02:35 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : [20/Jan/2026:19:02:35] ENGINE Serving on http://192.168.122.100:8765
Jan 20 19:02:35 compute-0 ceph-mgr[75417]: [cephadm INFO cherrypy.error] [20/Jan/2026:19:02:35] ENGINE Serving on https://192.168.122.100:7150
Jan 20 19:02:35 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : [20/Jan/2026:19:02:35] ENGINE Serving on https://192.168.122.100:7150
Jan 20 19:02:35 compute-0 ceph-mgr[75417]: [cephadm INFO cherrypy.error] [20/Jan/2026:19:02:35] ENGINE Bus STARTED
Jan 20 19:02:35 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : [20/Jan/2026:19:02:35] ENGINE Bus STARTED
Jan 20 19:02:35 compute-0 ceph-mgr[75417]: [cephadm INFO cherrypy.error] [20/Jan/2026:19:02:35] ENGINE Client ('192.168.122.100', 59146) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 19:02:35 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : [20/Jan/2026:19:02:35] ENGINE Client ('192.168.122.100', 59146) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 19:02:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 19:02:36 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:02:36 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:36 compute-0 ceph-mon[75120]: mgrmap e7: compute-0.meyjbf(active, since 3s)
Jan 20 19:02:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:36 compute-0 ceph-mon[75120]: Found migration_current of "None". Setting to last migration.
Jan 20 19:02:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:02:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:02:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/mirror_snapshot_schedule"} : dispatch
Jan 20 19:02:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.meyjbf/trash_purge_schedule"} : dispatch
Jan 20 19:02:36 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/817075272' entity='client.admin' cmd={"prefix": "mgr module enable", "module": "orchestrator"} : dispatch
Jan 20 19:02:36 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/817075272' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 20 19:02:36 compute-0 hopeful_aryabhata[76186]: module 'orchestrator' is already enabled (always-on)
Jan 20 19:02:36 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.meyjbf(active, since 5s)
Jan 20 19:02:36 compute-0 systemd[1]: libpod-3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533.scope: Deactivated successfully.
Jan 20 19:02:36 compute-0 podman[76119]: 2026-01-20 19:02:36.757805926 +0000 UTC m=+2.307212629 container died 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-003079fc76c17ba5b9dae0c5c4fdd44c0a1af6e391fb2788f970b7cfd9399a49-merged.mount: Deactivated successfully.
Jan 20 19:02:36 compute-0 podman[76119]: 2026-01-20 19:02:36.816287558 +0000 UTC m=+2.365694261 container remove 3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533 (image=quay.io/ceph/ceph:v20, name=hopeful_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:36 compute-0 systemd[1]: libpod-conmon-3fbaafd270b00a060d88ea95c601f5919af931b97c686e9d0659dfa8a7e37533.scope: Deactivated successfully.
Jan 20 19:02:36 compute-0 podman[76244]: 2026-01-20 19:02:36.873494658 +0000 UTC m=+0.037637230 container create cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 19:02:36 compute-0 systemd[1]: Started libpod-conmon-cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224.scope.
Jan 20 19:02:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a76f276f8bdc57758e5c4d5582bdcef9a90f451c3d747f767bcecb74bf9b90d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a76f276f8bdc57758e5c4d5582bdcef9a90f451c3d747f767bcecb74bf9b90d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a76f276f8bdc57758e5c4d5582bdcef9a90f451c3d747f767bcecb74bf9b90d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:36 compute-0 podman[76244]: 2026-01-20 19:02:36.928872881 +0000 UTC m=+0.093015483 container init cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:36 compute-0 podman[76244]: 2026-01-20 19:02:36.933373667 +0000 UTC m=+0.097516239 container start cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 19:02:36 compute-0 podman[76244]: 2026-01-20 19:02:36.936631323 +0000 UTC m=+0.100773915 container attach cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:02:36 compute-0 podman[76244]: 2026-01-20 19:02:36.85564216 +0000 UTC m=+0.019784752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052667 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:02:37 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Jan 20 19:02:37 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:37 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:37 compute-0 ceph-mon[75120]: [20/Jan/2026:19:02:35] ENGINE Bus STARTING
Jan 20 19:02:37 compute-0 ceph-mon[75120]: [20/Jan/2026:19:02:35] ENGINE Serving on http://192.168.122.100:8765
Jan 20 19:02:37 compute-0 ceph-mon[75120]: [20/Jan/2026:19:02:35] ENGINE Serving on https://192.168.122.100:7150
Jan 20 19:02:37 compute-0 ceph-mon[75120]: [20/Jan/2026:19:02:35] ENGINE Bus STARTED
Jan 20 19:02:37 compute-0 ceph-mon[75120]: [20/Jan/2026:19:02:35] ENGINE Client ('192.168.122.100', 59146) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Jan 20 19:02:37 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:02:37 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/817075272' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "orchestrator"}]': finished
Jan 20 19:02:37 compute-0 ceph-mon[75120]: mgrmap e8: compute-0.meyjbf(active, since 5s)
Jan 20 19:02:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 19:02:37 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:02:37 compute-0 systemd[1]: libpod-cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224.scope: Deactivated successfully.
Jan 20 19:02:37 compute-0 podman[76244]: 2026-01-20 19:02:37.816154746 +0000 UTC m=+0.980297328 container died cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 19:02:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-8a76f276f8bdc57758e5c4d5582bdcef9a90f451c3d747f767bcecb74bf9b90d-merged.mount: Deactivated successfully.
Jan 20 19:02:37 compute-0 podman[76244]: 2026-01-20 19:02:37.905955414 +0000 UTC m=+1.070097986 container remove cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224 (image=quay.io/ceph/ceph:v20, name=vigorous_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:37 compute-0 systemd[1]: libpod-conmon-cebec889b1c48f7e5f01b6eb812d59c616e2a56dee18da59db3b69ff7d76d224.scope: Deactivated successfully.
Jan 20 19:02:37 compute-0 podman[76300]: 2026-01-20 19:02:37.969694248 +0000 UTC m=+0.043239080 container create 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:38 compute-0 systemd[1]: Started libpod-conmon-341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672.scope.
Jan 20 19:02:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c742a7f3bff75b68f58cdc70fe115d758c7f9381f6a4c18cd72212694babf78e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c742a7f3bff75b68f58cdc70fe115d758c7f9381f6a4c18cd72212694babf78e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c742a7f3bff75b68f58cdc70fe115d758c7f9381f6a4c18cd72212694babf78e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:38 compute-0 podman[76300]: 2026-01-20 19:02:37.950105586 +0000 UTC m=+0.023650468 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:38 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:38 compute-0 podman[76300]: 2026-01-20 19:02:38.542335017 +0000 UTC m=+0.615879899 container init 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:02:38 compute-0 podman[76300]: 2026-01-20 19:02:38.552741047 +0000 UTC m=+0.626285879 container start 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:02:38 compute-0 podman[76300]: 2026-01-20 19:02:38.557738168 +0000 UTC m=+0.631283050 container attach 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True)
Jan 20 19:02:38 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Jan 20 19:02:39 compute-0 ceph-mon[75120]: from='client.14138 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:02:39 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:39 compute-0 ceph-mgr[75417]: [cephadm INFO root] Set ssh ssh_user
Jan 20 19:02:39 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Jan 20 19:02:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Jan 20 19:02:39 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:39 compute-0 ceph-mgr[75417]: [cephadm INFO root] Set ssh ssh_config
Jan 20 19:02:39 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Jan 20 19:02:39 compute-0 ceph-mgr[75417]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Jan 20 19:02:39 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Jan 20 19:02:39 compute-0 jovial_curran[76316]: ssh user set to ceph-admin. sudo will be used
Jan 20 19:02:39 compute-0 systemd[1]: libpod-341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672.scope: Deactivated successfully.
Jan 20 19:02:39 compute-0 podman[76300]: 2026-01-20 19:02:39.053959503 +0000 UTC m=+1.127504335 container died 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-c742a7f3bff75b68f58cdc70fe115d758c7f9381f6a4c18cd72212694babf78e-merged.mount: Deactivated successfully.
Jan 20 19:02:39 compute-0 podman[76300]: 2026-01-20 19:02:39.094628788 +0000 UTC m=+1.168173620 container remove 341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672 (image=quay.io/ceph/ceph:v20, name=jovial_curran, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:39 compute-0 systemd[1]: libpod-conmon-341dbb5cd15abb227c620075a419467b49c895cd49332c02886fc058dfd4b672.scope: Deactivated successfully.
Jan 20 19:02:39 compute-0 podman[76355]: 2026-01-20 19:02:39.139084856 +0000 UTC m=+0.026153098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:39 compute-0 podman[76355]: 2026-01-20 19:02:39.461624482 +0000 UTC m=+0.348692694 container create fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:39 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:39 compute-0 systemd[1]: Started libpod-conmon-fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c.scope.
Jan 20 19:02:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:40 compute-0 podman[76355]: 2026-01-20 19:02:40.217850758 +0000 UTC m=+1.104919060 container init fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:02:40 compute-0 ceph-mon[75120]: from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:40 compute-0 ceph-mon[75120]: Set ssh ssh_user
Jan 20 19:02:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:40 compute-0 ceph-mon[75120]: Set ssh ssh_config
Jan 20 19:02:40 compute-0 ceph-mon[75120]: ssh user set to ceph-admin. sudo will be used
Jan 20 19:02:40 compute-0 podman[76355]: 2026-01-20 19:02:40.230100796 +0000 UTC m=+1.117169048 container start fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:02:40 compute-0 podman[76355]: 2026-01-20 19:02:40.248623187 +0000 UTC m=+1.135691469 container attach fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:02:40 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:40 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Jan 20 19:02:40 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:40 compute-0 ceph-mgr[75417]: [cephadm INFO root] Set ssh ssh_identity_key
Jan 20 19:02:40 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Jan 20 19:02:40 compute-0 ceph-mgr[75417]: [cephadm INFO root] Set ssh private key
Jan 20 19:02:40 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Set ssh private key
Jan 20 19:02:40 compute-0 systemd[1]: libpod-fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c.scope: Deactivated successfully.
Jan 20 19:02:40 compute-0 podman[76355]: 2026-01-20 19:02:40.675836395 +0000 UTC m=+1.562904607 container died fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f1b34f52ec97437ea268ad3f0ad254b423c25cb8a44f816bf29dfe9bc3d354d-merged.mount: Deactivated successfully.
Jan 20 19:02:40 compute-0 podman[76355]: 2026-01-20 19:02:40.719523165 +0000 UTC m=+1.606591377 container remove fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c (image=quay.io/ceph/ceph:v20, name=keen_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True)
Jan 20 19:02:40 compute-0 systemd[1]: libpod-conmon-fc1081e8ca1557b0792c8591d53b9a4034c1f8191a3994cdcb4633b7f050fb7c.scope: Deactivated successfully.
Jan 20 19:02:40 compute-0 podman[76409]: 2026-01-20 19:02:40.774847304 +0000 UTC m=+0.037029770 container create b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:40 compute-0 systemd[1]: Started libpod-conmon-b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b.scope.
Jan 20 19:02:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:40 compute-0 podman[76409]: 2026-01-20 19:02:40.847100318 +0000 UTC m=+0.109282794 container init b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 20 19:02:40 compute-0 podman[76409]: 2026-01-20 19:02:40.852155622 +0000 UTC m=+0.114338088 container start b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:40 compute-0 podman[76409]: 2026-01-20 19:02:40.759044815 +0000 UTC m=+0.021227301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:40 compute-0 podman[76409]: 2026-01-20 19:02:40.85587846 +0000 UTC m=+0.118060936 container attach b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:02:41 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Jan 20 19:02:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:41 compute-0 ceph-mgr[75417]: [cephadm INFO root] Set ssh ssh_identity_pub
Jan 20 19:02:41 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Jan 20 19:02:41 compute-0 systemd[1]: libpod-b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b.scope: Deactivated successfully.
Jan 20 19:02:41 compute-0 podman[76409]: 2026-01-20 19:02:41.261839967 +0000 UTC m=+0.524022453 container died b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-1d54b9ec3f609a781152ee4e19d59b25ab3c181263f990976b4c23efd4a5a6de-merged.mount: Deactivated successfully.
Jan 20 19:02:41 compute-0 podman[76409]: 2026-01-20 19:02:41.295413331 +0000 UTC m=+0.557595797 container remove b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b (image=quay.io/ceph/ceph:v20, name=busy_lalande, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:41 compute-0 systemd[1]: libpod-conmon-b622bd4f30d0275bfb9734e965c758231afcf9bb12deb91a2f57d7227e33662b.scope: Deactivated successfully.
Jan 20 19:02:41 compute-0 podman[76464]: 2026-01-20 19:02:41.3629888 +0000 UTC m=+0.049918231 container create b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 19:02:41 compute-0 systemd[1]: Started libpod-conmon-b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede.scope.
Jan 20 19:02:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2effd839d83fe20d49dbbc52c29f34d23b159b0fd8dee8d432c4b085280ab387/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2effd839d83fe20d49dbbc52c29f34d23b159b0fd8dee8d432c4b085280ab387/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2effd839d83fe20d49dbbc52c29f34d23b159b0fd8dee8d432c4b085280ab387/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:41 compute-0 podman[76464]: 2026-01-20 19:02:41.428988303 +0000 UTC m=+0.115917744 container init b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 19:02:41 compute-0 podman[76464]: 2026-01-20 19:02:41.336057965 +0000 UTC m=+0.022987486 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:41 compute-0 podman[76464]: 2026-01-20 19:02:41.433411946 +0000 UTC m=+0.120341387 container start b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:41 compute-0 podman[76464]: 2026-01-20 19:02:41.438024357 +0000 UTC m=+0.124953798 container attach b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:02:41 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:41 compute-0 ceph-mon[75120]: from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:41 compute-0 ceph-mon[75120]: Set ssh ssh_identity_key
Jan 20 19:02:41 compute-0 ceph-mon[75120]: Set ssh private key
Jan 20 19:02:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:41 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:41 compute-0 vigorous_banach[76480]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDRdjTUKzSdmKvb3VwE6HE/qbW6hBZuJGSGDa7vwZvuK+uZHe8W/4BziBmg9gcZ6u6FDNHkIMinQJNsQBSP2Ak5KZdiDPCHcM6W7/ajdmqThMfxESSt/3LoU0t7kmc/lAU7NXy70cc05z46Oe9LtwVu+tM8CDfI3vKJHrr5jaHgmTiHQSMuWuPz2ERtV8lTVZyy3CTKXmg/fWNfbcr7T8Gtbkkx/pzgjxy5loaPKzQWZXjVg+Jvxcpyl2uL6a7k/xmW3uRoKLCujuI1GPj4sbFGShG3DT8vVNhqla0rmF6/ltXz9fFMUoVfpoCdQqdeMrBi2JTyITWTqiH2HZETIUygLFC1VJZUfEIEFpSQFpCMNA8kFH2qzJkxd2ynLCUwWGCEVK//8ye5jmGGtOwSrP2ABF0V8zuwA4Qv56RT0uKq4cy0tTIPNrF9q/t0TbAg6bkg/ziEkFc49CYuzPg0MYpWGiIp1RuH4DpLgPrby4mpruxKDOTqe7BLGnES0JT5VJE= zuul@controller
Jan 20 19:02:41 compute-0 systemd[1]: libpod-b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede.scope: Deactivated successfully.
Jan 20 19:02:41 compute-0 podman[76464]: 2026-01-20 19:02:41.827561744 +0000 UTC m=+0.514491225 container died b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-2effd839d83fe20d49dbbc52c29f34d23b159b0fd8dee8d432c4b085280ab387-merged.mount: Deactivated successfully.
Jan 20 19:02:41 compute-0 podman[76464]: 2026-01-20 19:02:41.872052483 +0000 UTC m=+0.558981934 container remove b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede (image=quay.io/ceph/ceph:v20, name=vigorous_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:41 compute-0 systemd[1]: libpod-conmon-b73c58f8b2f6224666a49cd155185d23682016121c3696cedde9df7a50e6dede.scope: Deactivated successfully.
Jan 20 19:02:41 compute-0 podman[76518]: 2026-01-20 19:02:41.929171159 +0000 UTC m=+0.037018621 container create 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:41 compute-0 systemd[1]: Started libpod-conmon-89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735.scope.
Jan 20 19:02:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d48f8543f6e0aa548083acf2fd7af041e85fd0fca6522da08c053ef1110677/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d48f8543f6e0aa548083acf2fd7af041e85fd0fca6522da08c053ef1110677/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/45d48f8543f6e0aa548083acf2fd7af041e85fd0fca6522da08c053ef1110677/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:41 compute-0 podman[76518]: 2026-01-20 19:02:41.990457385 +0000 UTC m=+0.098304877 container init 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 19:02:41 compute-0 podman[76518]: 2026-01-20 19:02:41.994725081 +0000 UTC m=+0.102572543 container start 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 19:02:42 compute-0 podman[76518]: 2026-01-20 19:02:42.000015394 +0000 UTC m=+0.107862866 container attach 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:02:42 compute-0 podman[76518]: 2026-01-20 19:02:41.912264366 +0000 UTC m=+0.020111848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:42 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054702 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:02:42 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:42 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:42 compute-0 sshd-session[76560]: Accepted publickey for ceph-admin from 192.168.122.100 port 52018 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:42 compute-0 systemd[1]: Created slice User Slice of UID 42477.
Jan 20 19:02:42 compute-0 systemd[1]: Starting User Runtime Directory /run/user/42477...
Jan 20 19:02:42 compute-0 systemd-logind[797]: New session 21 of user ceph-admin.
Jan 20 19:02:42 compute-0 systemd[1]: Finished User Runtime Directory /run/user/42477.
Jan 20 19:02:42 compute-0 systemd[1]: Starting User Manager for UID 42477...
Jan 20 19:02:42 compute-0 ceph-mon[75120]: from='client.14144 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:42 compute-0 ceph-mon[75120]: Set ssh ssh_identity_pub
Jan 20 19:02:42 compute-0 systemd[76564]: pam_unix(systemd-user:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:42 compute-0 systemd[76564]: Queued start job for default target Main User Target.
Jan 20 19:02:42 compute-0 systemd[76564]: Created slice User Application Slice.
Jan 20 19:02:42 compute-0 systemd[76564]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 20 19:02:42 compute-0 systemd[76564]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 19:02:42 compute-0 systemd[76564]: Reached target Paths.
Jan 20 19:02:42 compute-0 systemd[76564]: Reached target Timers.
Jan 20 19:02:42 compute-0 systemd[76564]: Starting D-Bus User Message Bus Socket...
Jan 20 19:02:42 compute-0 systemd[76564]: Starting Create User's Volatile Files and Directories...
Jan 20 19:02:42 compute-0 sshd-session[76577]: Accepted publickey for ceph-admin from 192.168.122.100 port 52028 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:42 compute-0 systemd[76564]: Finished Create User's Volatile Files and Directories.
Jan 20 19:02:42 compute-0 systemd[76564]: Listening on D-Bus User Message Bus Socket.
Jan 20 19:02:42 compute-0 systemd[76564]: Reached target Sockets.
Jan 20 19:02:42 compute-0 systemd[76564]: Reached target Basic System.
Jan 20 19:02:42 compute-0 systemd[76564]: Reached target Main User Target.
Jan 20 19:02:42 compute-0 systemd[76564]: Startup finished in 127ms.
Jan 20 19:02:42 compute-0 systemd[1]: Started User Manager for UID 42477.
Jan 20 19:02:42 compute-0 systemd[1]: Started Session 21 of User ceph-admin.
Jan 20 19:02:42 compute-0 systemd-logind[797]: New session 23 of user ceph-admin.
Jan 20 19:02:42 compute-0 systemd[1]: Started Session 23 of User ceph-admin.
Jan 20 19:02:42 compute-0 sshd-session[76560]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:42 compute-0 sshd-session[76577]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:42 compute-0 sudo[76584]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:42 compute-0 sudo[76584]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:42 compute-0 sudo[76584]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:43 compute-0 sshd-session[76609]: Accepted publickey for ceph-admin from 192.168.122.100 port 52034 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:43 compute-0 systemd-logind[797]: New session 24 of user ceph-admin.
Jan 20 19:02:43 compute-0 systemd[1]: Started Session 24 of User ceph-admin.
Jan 20 19:02:43 compute-0 sshd-session[76609]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:43 compute-0 sudo[76613]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 20 19:02:43 compute-0 sudo[76613]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:43 compute-0 sudo[76613]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:43 compute-0 sshd-session[76638]: Accepted publickey for ceph-admin from 192.168.122.100 port 52038 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:43 compute-0 systemd-logind[797]: New session 25 of user ceph-admin.
Jan 20 19:02:43 compute-0 systemd[1]: Started Session 25 of User ceph-admin.
Jan 20 19:02:43 compute-0 sshd-session[76638]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:43 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:43 compute-0 sudo[76642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 20 19:02:43 compute-0 sudo[76642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:43 compute-0 sudo[76642]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:43 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Jan 20 19:02:43 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Jan 20 19:02:43 compute-0 ceph-mon[75120]: from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:43 compute-0 ceph-mon[75120]: from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:43 compute-0 ceph-mon[75120]: Deploying cephadm binary to compute-0
Jan 20 19:02:43 compute-0 sshd-session[76669]: Accepted publickey for ceph-admin from 192.168.122.100 port 52044 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:43 compute-0 systemd-logind[797]: New session 26 of user ceph-admin.
Jan 20 19:02:43 compute-0 systemd[1]: Started Session 26 of User ceph-admin.
Jan 20 19:02:43 compute-0 sshd-session[76669]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:43 compute-0 sudo[76673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:43 compute-0 sudo[76673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:43 compute-0 sudo[76673]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:44 compute-0 sshd-session[76698]: Accepted publickey for ceph-admin from 192.168.122.100 port 52046 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:44 compute-0 systemd-logind[797]: New session 27 of user ceph-admin.
Jan 20 19:02:44 compute-0 systemd[1]: Started Session 27 of User ceph-admin.
Jan 20 19:02:44 compute-0 sshd-session[76698]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:44 compute-0 sshd-session[76667]: Invalid user pbanx from 45.148.10.240 port 35854
Jan 20 19:02:44 compute-0 sudo[76702]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:44 compute-0 sudo[76702]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:44 compute-0 sudo[76702]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:44 compute-0 sshd-session[76667]: Connection closed by invalid user pbanx 45.148.10.240 port 35854 [preauth]
Jan 20 19:02:44 compute-0 sshd-session[76727]: Accepted publickey for ceph-admin from 192.168.122.100 port 52062 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:44 compute-0 systemd-logind[797]: New session 28 of user ceph-admin.
Jan 20 19:02:44 compute-0 systemd[1]: Started Session 28 of User ceph-admin.
Jan 20 19:02:44 compute-0 sshd-session[76727]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:44 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:44 compute-0 sudo[76731]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 20 19:02:44 compute-0 sudo[76731]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:44 compute-0 sudo[76731]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:44 compute-0 sshd-session[76756]: Accepted publickey for ceph-admin from 192.168.122.100 port 52078 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:44 compute-0 systemd-logind[797]: New session 29 of user ceph-admin.
Jan 20 19:02:44 compute-0 systemd[1]: Started Session 29 of User ceph-admin.
Jan 20 19:02:44 compute-0 sshd-session[76756]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:44 compute-0 sudo[76760]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:44 compute-0 sudo[76760]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:44 compute-0 sudo[76760]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:45 compute-0 sshd-session[76785]: Accepted publickey for ceph-admin from 192.168.122.100 port 52092 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:45 compute-0 systemd-logind[797]: New session 30 of user ceph-admin.
Jan 20 19:02:45 compute-0 systemd[1]: Started Session 30 of User ceph-admin.
Jan 20 19:02:45 compute-0 sshd-session[76785]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:45 compute-0 sudo[76789]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new
Jan 20 19:02:45 compute-0 sudo[76789]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:45 compute-0 sudo[76789]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:45 compute-0 sshd-session[76814]: Accepted publickey for ceph-admin from 192.168.122.100 port 51464 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:45 compute-0 systemd-logind[797]: New session 31 of user ceph-admin.
Jan 20 19:02:45 compute-0 systemd[1]: Started Session 31 of User ceph-admin.
Jan 20 19:02:45 compute-0 sshd-session[76814]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:45 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:46 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:46 compute-0 sshd-session[76841]: Accepted publickey for ceph-admin from 192.168.122.100 port 51468 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:46 compute-0 systemd-logind[797]: New session 32 of user ceph-admin.
Jan 20 19:02:46 compute-0 systemd[1]: Started Session 32 of User ceph-admin.
Jan 20 19:02:46 compute-0 sshd-session[76841]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:46 compute-0 sudo[76845]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b.new /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b
Jan 20 19:02:46 compute-0 sudo[76845]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:46 compute-0 sudo[76845]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:47 compute-0 sshd-session[76870]: Accepted publickey for ceph-admin from 192.168.122.100 port 51480 ssh2: RSA SHA256:tgdMe1+saQYML2hq9kkcwTKdUjmuSg6pBjUR7C4bOQs
Jan 20 19:02:47 compute-0 systemd-logind[797]: New session 33 of user ceph-admin.
Jan 20 19:02:47 compute-0 systemd[1]: Started Session 33 of User ceph-admin.
Jan 20 19:02:47 compute-0 sshd-session[76870]: pam_unix(sshd:session): session opened for user ceph-admin(uid=42477) by ceph-admin(uid=0)
Jan 20 19:02:47 compute-0 sudo[76874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 20 19:02:47 compute-0 sudo[76874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:02:47 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:47 compute-0 sudo[76874]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 19:02:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:47 compute-0 ceph-mgr[75417]: [cephadm INFO root] Added host compute-0
Jan 20 19:02:47 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 20 19:02:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 19:02:47 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:02:47 compute-0 objective_pascal[76534]: Added host 'compute-0' with addr '192.168.122.100'
Jan 20 19:02:47 compute-0 systemd[1]: libpod-89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735.scope: Deactivated successfully.
Jan 20 19:02:47 compute-0 podman[76518]: 2026-01-20 19:02:47.601566808 +0000 UTC m=+5.709414280 container died 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:02:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-45d48f8543f6e0aa548083acf2fd7af041e85fd0fca6522da08c053ef1110677-merged.mount: Deactivated successfully.
Jan 20 19:02:47 compute-0 podman[76518]: 2026-01-20 19:02:47.640273409 +0000 UTC m=+5.748120881 container remove 89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735 (image=quay.io/ceph/ceph:v20, name=objective_pascal, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:47 compute-0 sudo[76920]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:47 compute-0 sudo[76920]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:47 compute-0 sudo[76920]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:47 compute-0 systemd[1]: libpod-conmon-89c5f870b647abb63fcbc3a770de90798e8aaddd45a4df1eb6cd827757e35735.scope: Deactivated successfully.
Jan 20 19:02:47 compute-0 sudo[76958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 pull
Jan 20 19:02:47 compute-0 sudo[76958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:47 compute-0 podman[76956]: 2026-01-20 19:02:47.71309927 +0000 UTC m=+0.047177239 container create 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:47 compute-0 systemd[1]: Started libpod-conmon-7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2.scope.
Jan 20 19:02:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:47 compute-0 podman[76956]: 2026-01-20 19:02:47.692598074 +0000 UTC m=+0.026676073 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20063aa07fe9c10ac513b70e7c0e06da618cedf49268171f9a2dbfd1d3498e50/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20063aa07fe9c10ac513b70e7c0e06da618cedf49268171f9a2dbfd1d3498e50/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20063aa07fe9c10ac513b70e7c0e06da618cedf49268171f9a2dbfd1d3498e50/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:47 compute-0 podman[76956]: 2026-01-20 19:02:47.814947306 +0000 UTC m=+0.149025285 container init 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 19:02:47 compute-0 podman[76956]: 2026-01-20 19:02:47.823233574 +0000 UTC m=+0.157311533 container start 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:02:47 compute-0 podman[76956]: 2026-01-20 19:02:47.826690231 +0000 UTC m=+0.160768350 container attach 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:02:48 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:48 compute-0 ceph-mgr[75417]: [cephadm INFO root] Saving service mon spec with placement count:5
Jan 20 19:02:48 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Jan 20 19:02:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 19:02:48 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:48 compute-0 hopeful_mahavira[76997]: Scheduled mon update...
Jan 20 19:02:48 compute-0 systemd[1]: libpod-7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2.scope: Deactivated successfully.
Jan 20 19:02:48 compute-0 podman[76956]: 2026-01-20 19:02:48.278650969 +0000 UTC m=+0.612728948 container died 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 19:02:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-20063aa07fe9c10ac513b70e7c0e06da618cedf49268171f9a2dbfd1d3498e50-merged.mount: Deactivated successfully.
Jan 20 19:02:48 compute-0 podman[76956]: 2026-01-20 19:02:48.32213869 +0000 UTC m=+0.656216659 container remove 7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2 (image=quay.io/ceph/ceph:v20, name=hopeful_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 19:02:48 compute-0 systemd[1]: libpod-conmon-7b64fc6199a5fcfa4537465dec2bfdb150e0ecd491b7b57bc3f23e9837eb20d2.scope: Deactivated successfully.
Jan 20 19:02:48 compute-0 podman[77059]: 2026-01-20 19:02:48.377520702 +0000 UTC m=+0.036133178 container create 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:02:48 compute-0 systemd[1]: Started libpod-conmon-67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b.scope.
Jan 20 19:02:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:48 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eea952879053eba462b19b212056b374f3633ad68c1ae980a793180a4b634f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eea952879053eba462b19b212056b374f3633ad68c1ae980a793180a4b634f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30eea952879053eba462b19b212056b374f3633ad68c1ae980a793180a4b634f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:48 compute-0 podman[77059]: 2026-01-20 19:02:48.453872954 +0000 UTC m=+0.112485460 container init 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:48 compute-0 podman[77059]: 2026-01-20 19:02:48.361164877 +0000 UTC m=+0.019777363 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:48 compute-0 podman[77059]: 2026-01-20 19:02:48.460553255 +0000 UTC m=+0.119165741 container start 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:48 compute-0 podman[77059]: 2026-01-20 19:02:48.464539556 +0000 UTC m=+0.123152072 container attach 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:02:48 compute-0 podman[77033]: 2026-01-20 19:02:48.555026336 +0000 UTC m=+0.583051501 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:48 compute-0 ceph-mon[75120]: Added host compute-0
Jan 20 19:02:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:02:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:48 compute-0 podman[77112]: 2026-01-20 19:02:48.687942677 +0000 UTC m=+0.055455958 container create 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:48 compute-0 systemd[1]: Started libpod-conmon-66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068.scope.
Jan 20 19:02:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:48 compute-0 podman[77112]: 2026-01-20 19:02:48.660194392 +0000 UTC m=+0.027707703 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:48 compute-0 podman[77112]: 2026-01-20 19:02:48.763645716 +0000 UTC m=+0.131158997 container init 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:48 compute-0 podman[77112]: 2026-01-20 19:02:48.768877158 +0000 UTC m=+0.136390429 container start 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 19:02:48 compute-0 podman[77112]: 2026-01-20 19:02:48.772111143 +0000 UTC m=+0.139624504 container attach 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 19:02:48 compute-0 bold_easley[77128]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable)
Jan 20 19:02:48 compute-0 systemd[1]: libpod-66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068.scope: Deactivated successfully.
Jan 20 19:02:48 compute-0 podman[77112]: 2026-01-20 19:02:48.866312891 +0000 UTC m=+0.233826172 container died 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:02:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-d2464ef4ed56d044c3cf15dae47639aaf697be1ed46199c6e7c95b4b1d52c49b-merged.mount: Deactivated successfully.
Jan 20 19:02:48 compute-0 podman[77112]: 2026-01-20 19:02:48.902411517 +0000 UTC m=+0.269924798 container remove 66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068 (image=quay.io/ceph/ceph:v20, name=bold_easley, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:02:48 compute-0 systemd[1]: libpod-conmon-66262ffb7f85c514d5f70d0db1b6bbc399ceee0060c2b88c0adc272224142068.scope: Deactivated successfully.
Jan 20 19:02:48 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:48 compute-0 ceph-mgr[75417]: [cephadm INFO root] Saving service mgr spec with placement count:2
Jan 20 19:02:48 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Jan 20 19:02:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 19:02:48 compute-0 sudo[76958]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:48 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:48 compute-0 kind_aryabhata[77075]: Scheduled mgr update...
Jan 20 19:02:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Jan 20 19:02:48 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:48 compute-0 systemd[1]: libpod-67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b.scope: Deactivated successfully.
Jan 20 19:02:48 compute-0 podman[77059]: 2026-01-20 19:02:48.98360764 +0000 UTC m=+0.642220136 container died 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 19:02:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-30eea952879053eba462b19b212056b374f3633ad68c1ae980a793180a4b634f-merged.mount: Deactivated successfully.
Jan 20 19:02:49 compute-0 podman[77059]: 2026-01-20 19:02:49.030962917 +0000 UTC m=+0.689575423 container remove 67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b (image=quay.io/ceph/ceph:v20, name=kind_aryabhata, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:49 compute-0 systemd[1]: libpod-conmon-67d4533662f7c91fa8716e2b9c7ded9ada8223d641f78f77df7ac793f4794f7b.scope: Deactivated successfully.
Jan 20 19:02:49 compute-0 sudo[77149]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:49 compute-0 sudo[77149]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:49 compute-0 sudo[77149]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:49 compute-0 podman[77181]: 2026-01-20 19:02:49.105351853 +0000 UTC m=+0.052250873 container create 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:49 compute-0 sudo[77194]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 20 19:02:49 compute-0 sudo[77194]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:49 compute-0 systemd[1]: Started libpod-conmon-9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915.scope.
Jan 20 19:02:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:49 compute-0 podman[77181]: 2026-01-20 19:02:49.082516546 +0000 UTC m=+0.029415586 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cd881e9fde525d386c27f00b2f1a25e46e194a198b2afbfb4a02e8d3fcd9c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cd881e9fde525d386c27f00b2f1a25e46e194a198b2afbfb4a02e8d3fcd9c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9cd881e9fde525d386c27f00b2f1a25e46e194a198b2afbfb4a02e8d3fcd9c2/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:49 compute-0 podman[77181]: 2026-01-20 19:02:49.199250007 +0000 UTC m=+0.146149027 container init 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:49 compute-0 podman[77181]: 2026-01-20 19:02:49.205189272 +0000 UTC m=+0.152088292 container start 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 19:02:49 compute-0 podman[77181]: 2026-01-20 19:02:49.223534374 +0000 UTC m=+0.170433534 container attach 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:49 compute-0 sudo[77194]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:02:49 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:49 compute-0 sudo[77269]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:49 compute-0 sudo[77269]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:49 compute-0 sudo[77269]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:49 compute-0 ceph-mgr[75417]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Jan 20 19:02:49 compute-0 sudo[77294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 20 19:02:49 compute-0 sudo[77294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:49 compute-0 ceph-mon[75120]: from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:49 compute-0 ceph-mon[75120]: Saving service mon spec with placement count:5
Jan 20 19:02:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:49 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:49 compute-0 ceph-mgr[75417]: [cephadm INFO root] Saving service crash spec with placement *
Jan 20 19:02:49 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Jan 20 19:02:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 19:02:49 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:49 compute-0 happy_almeida[77224]: Scheduled crash update...
Jan 20 19:02:49 compute-0 systemd[1]: libpod-9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915.scope: Deactivated successfully.
Jan 20 19:02:49 compute-0 podman[77181]: 2026-01-20 19:02:49.653078665 +0000 UTC m=+0.599977685 container died 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9cd881e9fde525d386c27f00b2f1a25e46e194a198b2afbfb4a02e8d3fcd9c2-merged.mount: Deactivated successfully.
Jan 20 19:02:49 compute-0 podman[77181]: 2026-01-20 19:02:49.695254922 +0000 UTC m=+0.642153942 container remove 9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915 (image=quay.io/ceph/ceph:v20, name=happy_almeida, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 19:02:49 compute-0 systemd[1]: libpod-conmon-9e668ff10b402671e5fcbca9da5d43d649008378073d39dbeb79065db199e915.scope: Deactivated successfully.
Jan 20 19:02:49 compute-0 podman[77346]: 2026-01-20 19:02:49.753283232 +0000 UTC m=+0.038444609 container create 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:49 compute-0 systemd[1]: Started libpod-conmon-91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701.scope.
Jan 20 19:02:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d809f214436a9e21b10dc659c6b58a3bc15d2edf6979bf209acb70f23626146c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d809f214436a9e21b10dc659c6b58a3bc15d2edf6979bf209acb70f23626146c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d809f214436a9e21b10dc659c6b58a3bc15d2edf6979bf209acb70f23626146c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:49 compute-0 podman[77346]: 2026-01-20 19:02:49.82062698 +0000 UTC m=+0.105788377 container init 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:49 compute-0 podman[77346]: 2026-01-20 19:02:49.8266568 +0000 UTC m=+0.111818227 container start 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:49 compute-0 podman[77346]: 2026-01-20 19:02:49.735332299 +0000 UTC m=+0.020493696 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:49 compute-0 podman[77346]: 2026-01-20 19:02:49.831347806 +0000 UTC m=+0.116509203 container attach 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:02:49 compute-0 podman[77398]: 2026-01-20 19:02:49.886609102 +0000 UTC m=+0.046433503 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:49 compute-0 podman[77398]: 2026-01-20 19:02:49.980658933 +0000 UTC m=+0.140483314 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:50 compute-0 sudo[77294]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:50 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:02:50 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:50 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Jan 20 19:02:50 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1711333958' entity='client.admin' 
Jan 20 19:02:50 compute-0 systemd[1]: libpod-91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701.scope: Deactivated successfully.
Jan 20 19:02:50 compute-0 podman[77346]: 2026-01-20 19:02:50.244254425 +0000 UTC m=+0.529415802 container died 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:50 compute-0 sudo[77498]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:50 compute-0 sudo[77498]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:50 compute-0 sudo[77498]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-d809f214436a9e21b10dc659c6b58a3bc15d2edf6979bf209acb70f23626146c-merged.mount: Deactivated successfully.
Jan 20 19:02:50 compute-0 podman[77346]: 2026-01-20 19:02:50.276122997 +0000 UTC m=+0.561284374 container remove 91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701 (image=quay.io/ceph/ceph:v20, name=sweet_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 20 19:02:50 compute-0 systemd[1]: libpod-conmon-91959f1dd4548c0dffafd592917373014c1b2b9970340bc647575d0cd6dd3701.scope: Deactivated successfully.
Jan 20 19:02:50 compute-0 sudo[77531]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:02:50 compute-0 sudo[77531]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:50 compute-0 podman[77555]: 2026-01-20 19:02:50.33629512 +0000 UTC m=+0.038248079 container create 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:50 compute-0 systemd[1]: Started libpod-conmon-3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e.scope.
Jan 20 19:02:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258b1ba0e8e5cab2c1b9edea32894afd2ee7a72b209fa355bd6c57b4bbff93d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258b1ba0e8e5cab2c1b9edea32894afd2ee7a72b209fa355bd6c57b4bbff93d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1258b1ba0e8e5cab2c1b9edea32894afd2ee7a72b209fa355bd6c57b4bbff93d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:50 compute-0 podman[77555]: 2026-01-20 19:02:50.404377003 +0000 UTC m=+0.106329972 container init 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:02:50 compute-0 podman[77555]: 2026-01-20 19:02:50.41012313 +0000 UTC m=+0.112076079 container start 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:02:50 compute-0 podman[77555]: 2026-01-20 19:02:50.41408522 +0000 UTC m=+0.116038169 container attach 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:50 compute-0 podman[77555]: 2026-01-20 19:02:50.320767514 +0000 UTC m=+0.022720483 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:50 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:50 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77597 (sysctl)
Jan 20 19:02:50 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 20 19:02:50 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 20 19:02:50 compute-0 ceph-mon[75120]: from='client.14152 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:50 compute-0 ceph-mon[75120]: Saving service mgr spec with placement count:2
Jan 20 19:02:50 compute-0 ceph-mon[75120]: from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:50 compute-0 ceph-mon[75120]: Saving service crash spec with placement *
Jan 20 19:02:50 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:50 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:50 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1711333958' entity='client.admin' 
Jan 20 19:02:50 compute-0 sudo[77531]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:50 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:50 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Jan 20 19:02:50 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:50 compute-0 systemd[1]: libpod-3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e.scope: Deactivated successfully.
Jan 20 19:02:50 compute-0 podman[77641]: 2026-01-20 19:02:50.91267162 +0000 UTC m=+0.020523228 container died 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 19:02:50 compute-0 sudo[77638]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:50 compute-0 sudo[77638]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:50 compute-0 sudo[77638]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:50 compute-0 sudo[77677]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 list-networks
Jan 20 19:02:50 compute-0 sudo[77677]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-1258b1ba0e8e5cab2c1b9edea32894afd2ee7a72b209fa355bd6c57b4bbff93d-merged.mount: Deactivated successfully.
Jan 20 19:02:51 compute-0 podman[77641]: 2026-01-20 19:02:51.058458438 +0000 UTC m=+0.166310066 container remove 3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e (image=quay.io/ceph/ceph:v20, name=thirsty_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:51 compute-0 systemd[1]: libpod-conmon-3dab0c961f233a9a256fc5afa3d293f0ebb7628d4d5cfce21cac0f8966a9704e.scope: Deactivated successfully.
Jan 20 19:02:51 compute-0 podman[77702]: 2026-01-20 19:02:51.179931609 +0000 UTC m=+0.095421599 container create f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 19:02:51 compute-0 podman[77702]: 2026-01-20 19:02:51.108285934 +0000 UTC m=+0.023775934 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:51 compute-0 systemd[1]: Started libpod-conmon-f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de.scope.
Jan 20 19:02:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932332ec3b5f52d81f958e0b7d353a4c407ff9ee92c0a532a3ac48c95614c26d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932332ec3b5f52d81f958e0b7d353a4c407ff9ee92c0a532a3ac48c95614c26d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/932332ec3b5f52d81f958e0b7d353a4c407ff9ee92c0a532a3ac48c95614c26d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:51 compute-0 sudo[77677]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:51 compute-0 podman[77702]: 2026-01-20 19:02:51.350711379 +0000 UTC m=+0.266201389 container init f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:02:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:02:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:51 compute-0 podman[77702]: 2026-01-20 19:02:51.361172111 +0000 UTC m=+0.276662101 container start f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:02:51 compute-0 podman[77702]: 2026-01-20 19:02:51.367394901 +0000 UTC m=+0.282884891 container attach f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:51 compute-0 sudo[77741]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:51 compute-0 sudo[77741]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:51 compute-0 sudo[77741]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:51 compute-0 sudo[77766]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- inventory --format=json-pretty --filter-for-batch
Jan 20 19:02:51 compute-0 sudo[77766]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:51 compute-0 ceph-mgr[75417]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Jan 20 19:02:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:02:51 compute-0 ceph-mon[75120]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 20 19:02:51 compute-0 podman[77821]: 2026-01-20 19:02:51.743777825 +0000 UTC m=+0.038372626 container create b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:51 compute-0 systemd[1]: Started libpod-conmon-b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7.scope.
Jan 20 19:02:51 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 19:02:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:51 compute-0 podman[77821]: 2026-01-20 19:02:51.81687721 +0000 UTC m=+0.111472031 container init b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:02:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:51 compute-0 ceph-mgr[75417]: [cephadm INFO root] Added label _admin to host compute-0
Jan 20 19:02:51 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Jan 20 19:02:51 compute-0 dreamy_panini[77728]: Added label _admin to host compute-0
Jan 20 19:02:51 compute-0 podman[77821]: 2026-01-20 19:02:51.725939658 +0000 UTC m=+0.020534469 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:02:51 compute-0 podman[77821]: 2026-01-20 19:02:51.823615393 +0000 UTC m=+0.118210194 container start b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:51 compute-0 condescending_elgamal[77838]: 167 167
Jan 20 19:02:51 compute-0 systemd[1]: libpod-b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7.scope: Deactivated successfully.
Jan 20 19:02:51 compute-0 podman[77821]: 2026-01-20 19:02:51.827560603 +0000 UTC m=+0.122155434 container attach b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:51 compute-0 podman[77821]: 2026-01-20 19:02:51.827832656 +0000 UTC m=+0.122427457 container died b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:51 compute-0 systemd[1]: libpod-f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de.scope: Deactivated successfully.
Jan 20 19:02:51 compute-0 podman[77702]: 2026-01-20 19:02:51.840841512 +0000 UTC m=+0.756331502 container died f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:02:51 compute-0 ceph-mon[75120]: from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:51 compute-0 ceph-mon[75120]: pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:02:51 compute-0 ceph-mon[75120]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Jan 20 19:02:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-932332ec3b5f52d81f958e0b7d353a4c407ff9ee92c0a532a3ac48c95614c26d-merged.mount: Deactivated successfully.
Jan 20 19:02:52 compute-0 podman[77702]: 2026-01-20 19:02:52.015793062 +0000 UTC m=+0.931283072 container remove f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de (image=quay.io/ceph/ceph:v20, name=dreamy_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:52 compute-0 systemd[1]: libpod-conmon-f9f1027cef9f3dc41bde26b46e52b627c39e127a949300bffc39291dda85e2de.scope: Deactivated successfully.
Jan 20 19:02:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-df917659f7ccd4c88fc110c9cb18a3ceb1ba4728151a3ec2e2e6df43983519cc-merged.mount: Deactivated successfully.
Jan 20 19:02:52 compute-0 podman[77821]: 2026-01-20 19:02:52.128780023 +0000 UTC m=+0.423374834 container remove b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_elgamal, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:52 compute-0 systemd[1]: libpod-conmon-b745907a2ea84ee4b154bfdfbcc5cf36d308b85606c4c804fa28c06faab88af7.scope: Deactivated successfully.
Jan 20 19:02:52 compute-0 podman[77870]: 2026-01-20 19:02:52.21481708 +0000 UTC m=+0.177203560 container create 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:02:52 compute-0 podman[77870]: 2026-01-20 19:02:52.142879311 +0000 UTC m=+0.105265811 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:02:52 compute-0 systemd[1]: Started libpod-conmon-17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2.scope.
Jan 20 19:02:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c6c36ac192419ee4dd8cf6ef22b446af2e2fd785d35b101f0935c0c8a5af20/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c6c36ac192419ee4dd8cf6ef22b446af2e2fd785d35b101f0935c0c8a5af20/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54c6c36ac192419ee4dd8cf6ef22b446af2e2fd785d35b101f0935c0c8a5af20/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:52 compute-0 podman[77870]: 2026-01-20 19:02:52.404799124 +0000 UTC m=+0.367185624 container init 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:02:52 compute-0 podman[77870]: 2026-01-20 19:02:52.412458332 +0000 UTC m=+0.374844812 container start 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:52 compute-0 podman[77870]: 2026-01-20 19:02:52.416001083 +0000 UTC m=+0.378387593 container attach 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:52 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Jan 20 19:02:52 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3862612236' entity='client.admin' 
Jan 20 19:02:52 compute-0 infallible_yalow[77888]: set mgr/dashboard/cluster/status
Jan 20 19:02:52 compute-0 systemd[1]: libpod-17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2.scope: Deactivated successfully.
Jan 20 19:02:52 compute-0 podman[77870]: 2026-01-20 19:02:52.968963145 +0000 UTC m=+0.931349625 container died 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:02:52 compute-0 ceph-mon[75120]: from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:02:52 compute-0 ceph-mon[75120]: Added label _admin to host compute-0
Jan 20 19:02:52 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3862612236' entity='client.admin' 
Jan 20 19:02:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-54c6c36ac192419ee4dd8cf6ef22b446af2e2fd785d35b101f0935c0c8a5af20-merged.mount: Deactivated successfully.
Jan 20 19:02:53 compute-0 podman[77870]: 2026-01-20 19:02:53.003622812 +0000 UTC m=+0.966009292 container remove 17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2 (image=quay.io/ceph/ceph:v20, name=infallible_yalow, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:02:53 compute-0 systemd[1]: libpod-conmon-17842f71a525df82bb590c72d97ca678e0392279142e5938c4c69f77943279f2.scope: Deactivated successfully.
Jan 20 19:02:53 compute-0 systemd[1]: Reloading.
Jan 20 19:02:53 compute-0 systemd-sysv-generator[77959]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:02:53 compute-0 systemd-rc-local-generator[77954]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:02:53 compute-0 sudo[74062]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:02:53 compute-0 podman[77974]: 2026-01-20 19:02:53.500693468 +0000 UTC m=+0.043859509 container create 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 19:02:53 compute-0 systemd[1]: Started libpod-conmon-4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c.scope.
Jan 20 19:02:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e29e971235cd4beddde4ac12da800473d2329e3f6f9f0fd51bbf31f7de6c7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e29e971235cd4beddde4ac12da800473d2329e3f6f9f0fd51bbf31f7de6c7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e29e971235cd4beddde4ac12da800473d2329e3f6f9f0fd51bbf31f7de6c7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17e29e971235cd4beddde4ac12da800473d2329e3f6f9f0fd51bbf31f7de6c7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:53 compute-0 podman[77974]: 2026-01-20 19:02:53.482697473 +0000 UTC m=+0.025863534 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:02:53 compute-0 podman[77974]: 2026-01-20 19:02:53.585488725 +0000 UTC m=+0.128654786 container init 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:53 compute-0 podman[77974]: 2026-01-20 19:02:53.593976963 +0000 UTC m=+0.137143004 container start 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 20 19:02:53 compute-0 podman[77974]: 2026-01-20 19:02:53.597525523 +0000 UTC m=+0.140691564 container attach 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:02:53 compute-0 sudo[78018]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwrybqqlplmutahctydusxzwnjqttfbz ; /usr/bin/python3'
Jan 20 19:02:53 compute-0 sudo[78018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:02:53 compute-0 python3[78020]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:02:53 compute-0 podman[78026]: 2026-01-20 19:02:53.905459527 +0000 UTC m=+0.043145165 container create b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:02:53 compute-0 systemd[1]: Started libpod-conmon-b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc.scope.
Jan 20 19:02:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e03491707b5b60c1fc6e696018b06b3494ce0cf1f147d137bb42097cfa7468/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6e03491707b5b60c1fc6e696018b06b3494ce0cf1f147d137bb42097cfa7468/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:53 compute-0 podman[78026]: 2026-01-20 19:02:53.882868821 +0000 UTC m=+0.020554259 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:54 compute-0 ceph-mon[75120]: pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:02:54 compute-0 podman[78026]: 2026-01-20 19:02:54.05152541 +0000 UTC m=+0.189210848 container init b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 19:02:54 compute-0 podman[78026]: 2026-01-20 19:02:54.057652434 +0000 UTC m=+0.195337852 container start b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]: [
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:     {
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:         "available": false,
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:         "being_replaced": false,
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:         "ceph_device_lvm": false,
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:         "lsm_data": {},
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:         "lvs": [],
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:         "path": "/dev/sr0",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:         "rejected_reasons": [
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "Has a FileSystem",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "Insufficient space (<5GB)"
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:         ],
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:         "sys_api": {
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "actuators": null,
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "device_nodes": [
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:                 "sr0"
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             ],
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "devname": "sr0",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "human_readable_size": "482.00 KB",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "id_bus": "ata",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "model": "QEMU DVD-ROM",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "nr_requests": "2",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "parent": "/dev/sr0",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "partitions": {},
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "path": "/dev/sr0",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "removable": "1",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "rev": "2.5+",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "ro": "0",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "rotational": "1",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "sas_address": "",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "sas_device_handle": "",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "scheduler_mode": "mq-deadline",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "sectors": 0,
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "sectorsize": "2048",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "size": 493568.0,
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "support_discard": "2048",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "type": "disk",
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:             "vendor": "QEMU"
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:         }
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]:     }
Jan 20 19:02:54 compute-0 amazing_lederberg[77990]: ]
Jan 20 19:02:54 compute-0 podman[78026]: 2026-01-20 19:02:54.084057844 +0000 UTC m=+0.221743262 container attach b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 20 19:02:54 compute-0 systemd[1]: libpod-4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c.scope: Deactivated successfully.
Jan 20 19:02:54 compute-0 podman[77974]: 2026-01-20 19:02:54.108864907 +0000 UTC m=+0.652030968 container died 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 19:02:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-17e29e971235cd4beddde4ac12da800473d2329e3f6f9f0fd51bbf31f7de6c7d-merged.mount: Deactivated successfully.
Jan 20 19:02:54 compute-0 podman[77974]: 2026-01-20 19:02:54.15013255 +0000 UTC m=+0.693298591 container remove 4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:02:54 compute-0 systemd[1]: libpod-conmon-4f53fa844093acd43ff91c06fccf576861f2e0123dab6b5385492d846cf1917c.scope: Deactivated successfully.
Jan 20 19:02:54 compute-0 sudo[77766]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:02:54 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:02:54 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:02:54 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:02:54 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 19:02:54 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 19:02:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:02:54 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:02:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:02:54 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:02:54 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Jan 20 19:02:54 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Jan 20 19:02:54 compute-0 sudo[78768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 19:02:54 compute-0 sudo[78768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[78768]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 sudo[78793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph
Jan 20 19:02:54 compute-0 sudo[78793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[78793]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 sudo[78818]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph/ceph.conf.new
Jan 20 19:02:54 compute-0 sudo[78818]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[78818]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:54 compute-0 sudo[78843]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:54 compute-0 sudo[78843]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[78843]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Jan 20 19:02:54 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3594887429' entity='client.admin' 
Jan 20 19:02:54 compute-0 systemd[1]: libpod-b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc.scope: Deactivated successfully.
Jan 20 19:02:54 compute-0 podman[78026]: 2026-01-20 19:02:54.493280027 +0000 UTC m=+0.630965465 container died b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:54 compute-0 sudo[78869]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph/ceph.conf.new
Jan 20 19:02:54 compute-0 sudo[78869]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[78869]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6e03491707b5b60c1fc6e696018b06b3494ce0cf1f147d137bb42097cfa7468-merged.mount: Deactivated successfully.
Jan 20 19:02:54 compute-0 podman[78026]: 2026-01-20 19:02:54.535651723 +0000 UTC m=+0.673337141 container remove b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc (image=quay.io/ceph/ceph:v20, name=mystifying_dirac, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:02:54 compute-0 systemd[1]: libpod-conmon-b28880939482b779964ace5878936a5fcaf7918248915d11f305f48ccf307ddc.scope: Deactivated successfully.
Jan 20 19:02:54 compute-0 sudo[78018]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 sudo[78932]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph/ceph.conf.new
Jan 20 19:02:54 compute-0 sudo[78932]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[78932]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 sudo[78957]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph/ceph.conf.new
Jan 20 19:02:54 compute-0 sudo[78957]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[78957]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 sudo[78982]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph/ceph.conf.new /etc/ceph/ceph.conf
Jan 20 19:02:54 compute-0 sudo[78982]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[78982]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf
Jan 20 19:02:54 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf
Jan 20 19:02:54 compute-0 sudo[79007]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config
Jan 20 19:02:54 compute-0 sudo[79007]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[79007]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 sudo[79032]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config
Jan 20 19:02:54 compute-0 sudo[79032]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[79032]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 sudo[79057]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf.new
Jan 20 19:02:54 compute-0 sudo[79057]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[79057]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:54 compute-0 sudo[79107]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:54 compute-0 sudo[79107]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:54 compute-0 sudo[79107]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79159]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf.new
Jan 20 19:02:55 compute-0 sudo[79159]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79159]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79230]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf.new
Jan 20 19:02:55 compute-0 sudo[79230]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79230]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79255]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf.new
Jan 20 19:02:55 compute-0 sudo[79255]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79255]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf.new /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf
Jan 20 19:02:55 compute-0 sudo[79280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79280]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 19:02:55 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 19:02:55 compute-0 sudo[79329]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /etc/ceph
Jan 20 19:02:55 compute-0 sudo[79329]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79329]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79377]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph
Jan 20 19:02:55 compute-0 sudo[79377]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79377]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79425]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijwxbxfxildojrlceoymzpkgxlgdcvjp ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768935774.8982103-36472-265219026780609/async_wrapper.py j364810835178 30 /home/zuul/.ansible/tmp/ansible-tmp-1768935774.8982103-36472-265219026780609/AnsiballZ_command.py _'
Jan 20 19:02:55 compute-0 sudo[79425]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:02:55 compute-0 sudo[79430]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph/ceph.client.admin.keyring.new
Jan 20 19:02:55 compute-0 sudo[79430]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79430]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79455]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:55 compute-0 sudo[79455]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79455]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:02:55 compute-0 ansible-async_wrapper.py[79429]: Invoked with j364810835178 30 /home/zuul/.ansible/tmp/ansible-tmp-1768935774.8982103-36472-265219026780609/AnsiballZ_command.py _
Jan 20 19:02:55 compute-0 ansible-async_wrapper.py[79499]: Starting module and watcher
Jan 20 19:02:55 compute-0 ansible-async_wrapper.py[79499]: Start watching 79501 (30)
Jan 20 19:02:55 compute-0 ansible-async_wrapper.py[79501]: Start module (79501)
Jan 20 19:02:55 compute-0 ansible-async_wrapper.py[79429]: Return async_wrapper task started.
Jan 20 19:02:55 compute-0 sudo[79425]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79480]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph/ceph.client.admin.keyring.new
Jan 20 19:02:55 compute-0 sudo[79480]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79480]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79533]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph/ceph.client.admin.keyring.new
Jan 20 19:02:55 compute-0 sudo[79533]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79533]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 python3[79505]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:02:55 compute-0 sudo[79558]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph/ceph.client.admin.keyring.new
Jan 20 19:02:55 compute-0 sudo[79558]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79558]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/etc/ceph/ceph.client.admin.keyring.new /etc/ceph/ceph.client.admin.keyring
Jan 20 19:02:55 compute-0 sudo[79596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79596]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring
Jan 20 19:02:55 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring
Jan 20 19:02:55 compute-0 podman[79562]: 2026-01-20 19:02:55.693053526 +0000 UTC m=+0.023134474 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:02:55 compute-0 sudo[79621]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config
Jan 20 19:02:55 compute-0 sudo[79621]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79621]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79646]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mkdir -p /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config
Jan 20 19:02:55 compute-0 sudo[79646]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79646]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:55 compute-0 sudo[79671]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/touch /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring.new
Jan 20 19:02:55 compute-0 sudo[79671]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:55 compute-0 sudo[79671]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:56 compute-0 sudo[79696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R ceph-admin /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:56 compute-0 sudo[79696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:56 compute-0 sudo[79696]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:56 compute-0 sudo[79721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 644 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring.new
Jan 20 19:02:56 compute-0 sudo[79721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:56 compute-0 sudo[79721]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:56 compute-0 sudo[79769]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chown -R 0:0 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring.new
Jan 20 19:02:56 compute-0 sudo[79769]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:56 compute-0 sudo[79769]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:56 compute-0 sudo[79794]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/chmod 600 /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring.new
Jan 20 19:02:56 compute-0 sudo[79794]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:56 compute-0 sudo[79794]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:56 compute-0 sudo[79819]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/mv -Z /tmp/cephadm-90fff835-31df-513f-a409-b6642f04e6ac/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring.new /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring
Jan 20 19:02:56 compute-0 sudo[79819]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:56 compute-0 sudo[79819]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:56 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:56 compute-0 sudo[79890]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvkcknejdsybcgwmdufyasugwbeasjbp ; /usr/bin/python3'
Jan 20 19:02:56 compute-0 sudo[79890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:02:57 compute-0 python3[79892]: ansible-ansible.legacy.async_status Invoked with jid=j364810835178.79429 mode=status _async_dir=/root/.ansible_async
Jan 20 19:02:57 compute-0 sudo[79890]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:02:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:02:57 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:57 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:57 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:57 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:57 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 19:02:57 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:02:57 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:02:57 compute-0 ceph-mon[75120]: Updating compute-0:/etc/ceph/ceph.conf
Jan 20 19:02:57 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3594887429' entity='client.admin' 
Jan 20 19:02:57 compute-0 podman[79562]: 2026-01-20 19:02:57.337783016 +0000 UTC m=+1.667863964 container create aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:02:57 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:02:57 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:02:57 compute-0 systemd[1]: Started libpod-conmon-aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708.scope.
Jan 20 19:02:57 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:57 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev 96519d7e-b245-4955-a0f0-3df65ad50e93 (Updating crash deployment (+1 -> 1))
Jan 20 19:02:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Jan 20 19:02:57 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 20 19:02:57 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 19:02:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:02:57 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:02:57 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Jan 20 19:02:57 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Jan 20 19:02:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e38ba1a7d52571b0b4e80a6119f539b06490c5a4613867c0bb40d9e0ba3c528/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e38ba1a7d52571b0b4e80a6119f539b06490c5a4613867c0bb40d9e0ba3c528/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:02:57 compute-0 podman[79562]: 2026-01-20 19:02:57.4327322 +0000 UTC m=+1.762813148 container init aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 19:02:57 compute-0 podman[79562]: 2026-01-20 19:02:57.441480021 +0000 UTC m=+1.771560949 container start aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:02:57 compute-0 podman[79562]: 2026-01-20 19:02:57.445264592 +0000 UTC m=+1.775345540 container attach aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:02:57 compute-0 sudo[79898]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:02:57 compute-0 sudo[79898]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:57 compute-0 sudo[79898]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:02:57 compute-0 sudo[79924]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:02:57 compute-0 sudo[79924]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:02:57 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:02:57 compute-0 exciting_northcutt[79895]: 
Jan 20 19:02:57 compute-0 exciting_northcutt[79895]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 19:02:57 compute-0 podman[80006]: 2026-01-20 19:02:57.866396688 +0000 UTC m=+0.038823327 container create 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 19:02:57 compute-0 systemd[1]: libpod-aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708.scope: Deactivated successfully.
Jan 20 19:02:57 compute-0 podman[79562]: 2026-01-20 19:02:57.880863305 +0000 UTC m=+2.210944233 container died aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 19:02:57 compute-0 systemd[1]: Started libpod-conmon-4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f.scope.
Jan 20 19:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-3e38ba1a7d52571b0b4e80a6119f539b06490c5a4613867c0bb40d9e0ba3c528-merged.mount: Deactivated successfully.
Jan 20 19:02:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:02:57 compute-0 podman[79562]: 2026-01-20 19:02:57.924269941 +0000 UTC m=+2.254350869 container remove aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708 (image=quay.io/ceph/ceph:v20, name=exciting_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:02:57 compute-0 podman[80006]: 2026-01-20 19:02:57.935504821 +0000 UTC m=+0.107931460 container init 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:02:57 compute-0 systemd[1]: libpod-conmon-aad56fce58f9d81e43c012b0bc598466faa86b6d15fc5147fbed931c69c33708.scope: Deactivated successfully.
Jan 20 19:02:57 compute-0 podman[80006]: 2026-01-20 19:02:57.941959361 +0000 UTC m=+0.114386010 container start 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 19:02:57 compute-0 ansible-async_wrapper.py[79501]: Module complete (79501)
Jan 20 19:02:57 compute-0 podman[80006]: 2026-01-20 19:02:57.847112881 +0000 UTC m=+0.019539540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:02:57 compute-0 podman[80006]: 2026-01-20 19:02:57.945943713 +0000 UTC m=+0.118370372 container attach 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:02:57 compute-0 gracious_babbage[80031]: 167 167
Jan 20 19:02:57 compute-0 systemd[1]: libpod-4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f.scope: Deactivated successfully.
Jan 20 19:02:57 compute-0 podman[80006]: 2026-01-20 19:02:57.948076526 +0000 UTC m=+0.120503165 container died 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 19:02:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a60d99ec6243eebc1cdd59c06027d1334d0bcad92433364efa7056ed8200bee-merged.mount: Deactivated successfully.
Jan 20 19:02:57 compute-0 podman[80006]: 2026-01-20 19:02:57.987047438 +0000 UTC m=+0.159474077 container remove 4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gracious_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:02:57 compute-0 systemd[1]: libpod-conmon-4f8f1b6d118ecf9f337982901794edaaadd0c362e26c20d6489ebeef3230b69f.scope: Deactivated successfully.
Jan 20 19:02:58 compute-0 sudo[80097]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgwimnvexkrsrjqpkkqoqdarsspviyre ; /usr/bin/python3'
Jan 20 19:02:58 compute-0 sudo[80097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:02:58 compute-0 systemd[1]: Reloading.
Jan 20 19:02:58 compute-0 python3[80099]: ansible-ansible.legacy.async_status Invoked with jid=j364810835178.79429 mode=status _async_dir=/root/.ansible_async
Jan 20 19:02:58 compute-0 sudo[80097]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:58 compute-0 systemd-sysv-generator[80129]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:02:58 compute-0 systemd-rc-local-generator[80124]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:02:58 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:02:58 compute-0 ceph-mon[75120]: Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.conf
Jan 20 19:02:58 compute-0 ceph-mon[75120]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Jan 20 19:02:58 compute-0 ceph-mon[75120]: pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:02:58 compute-0 ceph-mon[75120]: Updating compute-0:/var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/config/ceph.client.admin.keyring
Jan 20 19:02:58 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:58 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:58 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:02:58 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} : dispatch
Jan 20 19:02:58 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Jan 20 19:02:58 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:02:58 compute-0 ceph-mon[75120]: Deploying daemon crash.compute-0 on compute-0
Jan 20 19:02:58 compute-0 ceph-mon[75120]: pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:02:58 compute-0 sudo[80181]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyxwntkxhclurbydxmageqghnliovtxi ; /usr/bin/python3'
Jan 20 19:02:58 compute-0 sudo[80181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:02:58 compute-0 systemd[1]: Reloading.
Jan 20 19:02:58 compute-0 systemd-rc-local-generator[80213]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:02:58 compute-0 systemd-sysv-generator[80216]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:02:58 compute-0 python3[80185]: ansible-ansible.legacy.async_status Invoked with jid=j364810835178.79429 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 19:02:58 compute-0 sudo[80181]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:59 compute-0 systemd[1]: Starting Ceph crash.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:02:59 compute-0 sudo[80288]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqpjbjfifejxtsrryoeoltwxegmdnukb ; /usr/bin/python3'
Jan 20 19:02:59 compute-0 sudo[80288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:02:59 compute-0 podman[80304]: 2026-01-20 19:02:59.215337889 +0000 UTC m=+0.023393986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:02:59 compute-0 python3[80299]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 20 19:02:59 compute-0 sudo[80288]: pam_unix(sudo:session): session closed for user root
Jan 20 19:02:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:02:59 compute-0 sudo[80342]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qccnybdxsnmhioigwdjrqewgescbidmb ; /usr/bin/python3'
Jan 20 19:02:59 compute-0 sudo[80342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:02:59 compute-0 python3[80344]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:02:59 compute-0 podman[80304]: 2026-01-20 19:02:59.913802057 +0000 UTC m=+0.721858154 container create 6869885aa1d598b41af6be53eca6ba60937dcd7fe0247dfddbb485bce69e3fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 19:02:59 compute-0 ceph-mon[75120]: from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:03:00 compute-0 podman[80345]: 2026-01-20 19:03:00.018470099 +0000 UTC m=+0.177335776 container create e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c065cf03633176b665e17c85f8987b10ca8153a11f71c61e33790e4042a0826/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c065cf03633176b665e17c85f8987b10ca8153a11f71c61e33790e4042a0826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c065cf03633176b665e17c85f8987b10ca8153a11f71c61e33790e4042a0826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c065cf03633176b665e17c85f8987b10ca8153a11f71c61e33790e4042a0826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:00 compute-0 podman[80304]: 2026-01-20 19:03:00.041814681 +0000 UTC m=+0.849870738 container init 6869885aa1d598b41af6be53eca6ba60937dcd7fe0247dfddbb485bce69e3fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 20 19:03:00 compute-0 podman[80304]: 2026-01-20 19:03:00.050169953 +0000 UTC m=+0.858225990 container start 6869885aa1d598b41af6be53eca6ba60937dcd7fe0247dfddbb485bce69e3fde (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:03:00 compute-0 bash[80304]: 6869885aa1d598b41af6be53eca6ba60937dcd7fe0247dfddbb485bce69e3fde
Jan 20 19:03:00 compute-0 systemd[1]: Started libpod-conmon-e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00.scope.
Jan 20 19:03:00 compute-0 systemd[1]: Started Ceph crash.compute-0 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:03:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4206b391b783d88f08ea291ba9220c36fdd53838eee853454996a16047cf28e0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4206b391b783d88f08ea291ba9220c36fdd53838eee853454996a16047cf28e0/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4206b391b783d88f08ea291ba9220c36fdd53838eee853454996a16047cf28e0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:00 compute-0 podman[80345]: 2026-01-20 19:02:59.997248629 +0000 UTC m=+0.156114336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:00 compute-0 sudo[79924]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:00 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: INFO:ceph-crash:pinging cluster to exercise our key
Jan 20 19:03:00 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.207+0000 7fefedc6a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 20 19:03:00 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.207+0000 7fefedc6a640 -1 AuthRegistry(0x7fefe8052930) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 20 19:03:00 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.208+0000 7fefedc6a640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Jan 20 19:03:00 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.208+0000 7fefedc6a640 -1 AuthRegistry(0x7fefedc68fe0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Jan 20 19:03:00 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.209+0000 7fefe77fe640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Jan 20 19:03:00 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: 2026-01-20T19:03:00.209+0000 7fefedc6a640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Jan 20 19:03:00 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: [errno 13] RADOS permission denied (error connecting to the cluster)
Jan 20 19:03:00 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-crash-compute-0[80360]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Jan 20 19:03:00 compute-0 podman[80345]: 2026-01-20 19:03:00.372296529 +0000 UTC m=+0.531162236 container init e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 19:03:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:00 compute-0 podman[80345]: 2026-01-20 19:03:00.381824367 +0000 UTC m=+0.540690064 container start e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 19:03:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 19:03:00 compute-0 podman[80345]: 2026-01-20 19:03:00.3860438 +0000 UTC m=+0.544909467 container attach e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 20 19:03:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:00 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev 96519d7e-b245-4955-a0f0-3df65ad50e93 (Updating crash deployment (+1 -> 1))
Jan 20 19:03:00 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 96519d7e-b245-4955-a0f0-3df65ad50e93 (Updating crash deployment (+1 -> 1)) in 3 seconds
Jan 20 19:03:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Jan 20 19:03:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 19:03:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:00 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev c2f9acc6-952a-4760-a159-ad9d63358ff9 (Updating mgr deployment (+1 -> 2))
Jan 20 19:03:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.fpkyqm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 20 19:03:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.fpkyqm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 20 19:03:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fpkyqm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 19:03:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 19:03:00 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr services"} : dispatch
Jan 20 19:03:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:00 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:00 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.fpkyqm on compute-0
Jan 20 19:03:00 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.fpkyqm on compute-0
Jan 20 19:03:00 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:00 compute-0 sudo[80383]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:00 compute-0 sudo[80383]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:00 compute-0 sudo[80383]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:00 compute-0 ansible-async_wrapper.py[79499]: Done in kid B.
Jan 20 19:03:00 compute-0 sudo[80408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:03:00 compute-0 sudo[80408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:00 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14168 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:03:00 compute-0 focused_davinci[80367]: 
Jan 20 19:03:00 compute-0 focused_davinci[80367]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 19:03:00 compute-0 systemd[1]: libpod-e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00.scope: Deactivated successfully.
Jan 20 19:03:00 compute-0 podman[80345]: 2026-01-20 19:03:00.912823375 +0000 UTC m=+1.071689042 container died e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 19:03:00 compute-0 ceph-mon[75120]: pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.fpkyqm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 20 19:03:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.fpkyqm", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Jan 20 19:03:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr services"} : dispatch
Jan 20 19:03:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:00 compute-0 ceph-mon[75120]: Deploying daemon mgr.compute-0.fpkyqm on compute-0
Jan 20 19:03:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-4206b391b783d88f08ea291ba9220c36fdd53838eee853454996a16047cf28e0-merged.mount: Deactivated successfully.
Jan 20 19:03:01 compute-0 podman[80491]: 2026-01-20 19:03:00.911527342 +0000 UTC m=+0.027401378 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:01 compute-0 podman[80491]: 2026-01-20 19:03:01.011028366 +0000 UTC m=+0.126902392 container create aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:01 compute-0 podman[80345]: 2026-01-20 19:03:01.016035627 +0000 UTC m=+1.174901294 container remove e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00 (image=quay.io/ceph/ceph:v20, name=focused_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:01 compute-0 systemd[1]: libpod-conmon-e86e51c7e8b47edb710a7b3d83103e2a5b5c464c3cd3ce6e5160dda840e05b00.scope: Deactivated successfully.
Jan 20 19:03:01 compute-0 sudo[80342]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:01 compute-0 systemd[1]: Started libpod-conmon-aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252.scope.
Jan 20 19:03:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:01 compute-0 podman[80491]: 2026-01-20 19:03:01.149552126 +0000 UTC m=+0.265426162 container init aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:03:01 compute-0 podman[80491]: 2026-01-20 19:03:01.155985116 +0000 UTC m=+0.271859132 container start aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 19:03:01 compute-0 quirky_solomon[80524]: 167 167
Jan 20 19:03:01 compute-0 systemd[1]: libpod-aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252.scope: Deactivated successfully.
Jan 20 19:03:01 compute-0 podman[80491]: 2026-01-20 19:03:01.159491664 +0000 UTC m=+0.275365680 container attach aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:03:01 compute-0 podman[80491]: 2026-01-20 19:03:01.160731133 +0000 UTC m=+0.276605159 container died aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-ace5946be003c75ae06bfb55f8b4f6a0435ef3c1e73c1d53303888219c87ed34-merged.mount: Deactivated successfully.
Jan 20 19:03:01 compute-0 podman[80491]: 2026-01-20 19:03:01.197170565 +0000 UTC m=+0.313044581 container remove aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_solomon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:03:01 compute-0 systemd[1]: libpod-conmon-aa52b084c167054557251df14c2f3ed6900b4445e04fce733144b97f1857d252.scope: Deactivated successfully.
Jan 20 19:03:01 compute-0 systemd[1]: Reloading.
Jan 20 19:03:01 compute-0 systemd-rc-local-generator[80580]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:03:01 compute-0 systemd-sysv-generator[80584]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:03:01 compute-0 sudo[80599]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thfeqsuhrjnkrznejvkmhbqapuemwoqv ; /usr/bin/python3'
Jan 20 19:03:01 compute-0 sudo[80599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:01 compute-0 systemd[1]: Reloading.
Jan 20 19:03:01 compute-0 systemd-rc-local-generator[80629]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:03:01 compute-0 systemd-sysv-generator[80637]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:03:01 compute-0 python3[80603]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:01 compute-0 podman[80641]: 2026-01-20 19:03:01.734834283 +0000 UTC m=+0.102964332 container create 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:01 compute-0 podman[80641]: 2026-01-20 19:03:01.656768789 +0000 UTC m=+0.024898848 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:01 compute-0 systemd[1]: Started libpod-conmon-7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa.scope.
Jan 20 19:03:01 compute-0 systemd[1]: Starting Ceph mgr.compute-0.fpkyqm for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:03:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d23de597c346b43cd96cde34a57db9623e5304f72845d8819e3394d0ff85414/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d23de597c346b43cd96cde34a57db9623e5304f72845d8819e3394d0ff85414/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d23de597c346b43cd96cde34a57db9623e5304f72845d8819e3394d0ff85414/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:01 compute-0 podman[80641]: 2026-01-20 19:03:01.979135907 +0000 UTC m=+0.347265966 container init 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:03:01 compute-0 podman[80641]: 2026-01-20 19:03:01.986281001 +0000 UTC m=+0.354411040 container start 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 19:03:02 compute-0 ceph-mon[75120]: from='client.14168 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:03:02 compute-0 podman[80641]: 2026-01-20 19:03:02.000466003 +0000 UTC m=+0.368596112 container attach 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 19:03:02 compute-0 ceph-mon[75120]: pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:02 compute-0 podman[80713]: 2026-01-20 19:03:02.159379832 +0000 UTC m=+0.051074335 container create 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb3b5d27dc6d43ea43c47e99c6297833ccb281e937aa683bf57928bf98e13cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb3b5d27dc6d43ea43c47e99c6297833ccb281e937aa683bf57928bf98e13cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb3b5d27dc6d43ea43c47e99c6297833ccb281e937aa683bf57928bf98e13cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:02 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7eb3b5d27dc6d43ea43c47e99c6297833ccb281e937aa683bf57928bf98e13cd/merged/var/lib/ceph/mgr/ceph-compute-0.fpkyqm supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:02 compute-0 podman[80713]: 2026-01-20 19:03:02.228657323 +0000 UTC m=+0.120351856 container init 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 20 19:03:02 compute-0 podman[80713]: 2026-01-20 19:03:02.133445606 +0000 UTC m=+0.025140119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:02 compute-0 podman[80713]: 2026-01-20 19:03:02.234662182 +0000 UTC m=+0.126356685 container start 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:02 compute-0 bash[80713]: 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c
Jan 20 19:03:02 compute-0 systemd[1]: Started Ceph mgr.compute-0.fpkyqm for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:03:02 compute-0 ceph-mgr[80749]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 19:03:02 compute-0 ceph-mgr[80749]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mgr, pid 2
Jan 20 19:03:02 compute-0 ceph-mgr[80749]: pidfile_write: ignore empty --pid-file
Jan 20 19:03:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:02 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'alerts'
Jan 20 19:03:02 compute-0 sudo[80408]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Jan 20 19:03:02 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:02 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'balancer'
Jan 20 19:03:02 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:02 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/688908581' entity='client.admin' 
Jan 20 19:03:02 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:02 compute-0 systemd[1]: libpod-7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa.scope: Deactivated successfully.
Jan 20 19:03:02 compute-0 podman[80641]: 2026-01-20 19:03:02.51627529 +0000 UTC m=+0.884405339 container died 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 19:03:02 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'cephadm'
Jan 20 19:03:02 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:02 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev c2f9acc6-952a-4760-a159-ad9d63358ff9 (Updating mgr deployment (+1 -> 2))
Jan 20 19:03:02 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event c2f9acc6-952a-4760-a159-ad9d63358ff9 (Updating mgr deployment (+1 -> 2)) in 2 seconds
Jan 20 19:03:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 19:03:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d23de597c346b43cd96cde34a57db9623e5304f72845d8819e3394d0ff85414-merged.mount: Deactivated successfully.
Jan 20 19:03:02 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:02 compute-0 sudo[80784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:03:02 compute-0 sudo[80784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:02 compute-0 sudo[80784]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:02 compute-0 podman[80641]: 2026-01-20 19:03:02.670516965 +0000 UTC m=+1.038647004 container remove 7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa (image=quay.io/ceph/ceph:v20, name=mystifying_turing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:02 compute-0 systemd[1]: libpod-conmon-7c4f1459fa68d852cc735f2b1d542b24a2dc41e243739d24c6da6ce656d691fa.scope: Deactivated successfully.
Jan 20 19:03:02 compute-0 sudo[80599]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:02 compute-0 sudo[80809]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:02 compute-0 sudo[80809]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:02 compute-0 sudo[80809]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:02 compute-0 sudo[80834]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 20 19:03:02 compute-0 sudo[80834]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:02 compute-0 sudo[80882]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpwiyonwthtbrzqkidhiuwhlrvukmflf ; /usr/bin/python3'
Jan 20 19:03:02 compute-0 sudo[80882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:02 compute-0 python3[80884]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:03 compute-0 podman[80896]: 2026-01-20 19:03:03.040897472 +0000 UTC m=+0.022232420 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:03 compute-0 podman[80896]: 2026-01-20 19:03:03.174377408 +0000 UTC m=+0.155712336 container create 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:03:03 compute-0 systemd[1]: Started libpod-conmon-95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317.scope.
Jan 20 19:03:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd57857604ecde58115538786074b50f129ee9ef3e7cd66c58de559bd0aadb53/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd57857604ecde58115538786074b50f129ee9ef3e7cd66c58de559bd0aadb53/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd57857604ecde58115538786074b50f129ee9ef3e7cd66c58de559bd0aadb53/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:03 compute-0 podman[80896]: 2026-01-20 19:03:03.280546073 +0000 UTC m=+0.261881021 container init 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:03:03 compute-0 podman[80896]: 2026-01-20 19:03:03.287723188 +0000 UTC m=+0.269058116 container start 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:03 compute-0 podman[80896]: 2026-01-20 19:03:03.293488975 +0000 UTC m=+0.274823903 container attach 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:03 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'crash'
Jan 20 19:03:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:03 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/688908581' entity='client.admin' 
Jan 20 19:03:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:03 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'dashboard'
Jan 20 19:03:03 compute-0 podman[80957]: 2026-01-20 19:03:03.496711484 +0000 UTC m=+0.194389795 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 19:03:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:03 compute-0 podman[80957]: 2026-01-20 19:03:03.665753101 +0000 UTC m=+0.363431412 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Jan 20 19:03:04 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3844776602' entity='client.admin' 
Jan 20 19:03:04 compute-0 systemd[1]: libpod-95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317.scope: Deactivated successfully.
Jan 20 19:03:04 compute-0 podman[80896]: 2026-01-20 19:03:04.140067094 +0000 UTC m=+1.121402052 container died 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:03:04 compute-0 sudo[80834]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:04 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd57857604ecde58115538786074b50f129ee9ef3e7cd66c58de559bd0aadb53-merged.mount: Deactivated successfully.
Jan 20 19:03:04 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:04 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:03:04 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:03:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:03:04 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:04 compute-0 podman[80896]: 2026-01-20 19:03:04.214927572 +0000 UTC m=+1.196262500 container remove 95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317 (image=quay.io/ceph/ceph:v20, name=pensive_napier, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:04 compute-0 systemd[1]: libpod-conmon-95c05770aa6d51be5732a13c058231e75b28ac19b47f0b2b409c4ea0d25f7317.scope: Deactivated successfully.
Jan 20 19:03:04 compute-0 sudo[80882]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:04 compute-0 sudo[81106]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:03:04 compute-0 sudo[81106]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:04 compute-0 sudo[81106]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 19:03:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Jan 20 19:03:04 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 20 19:03:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Jan 20 19:03:04 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 20 19:03:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:04 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 19:03:04 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'devicehealth'
Jan 20 19:03:04 compute-0 sudo[81131]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:04 compute-0 sudo[81131]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:04 compute-0 sudo[81131]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:04 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'diskprediction_local'
Jan 20 19:03:04 compute-0 sudo[81156]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:03:04 compute-0 sudo[81156]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:04 compute-0 sudo[81204]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-esjojgvuamgprdhwpervdeuaefusmqzn ; /usr/bin/python3'
Jan 20 19:03:04 compute-0 sudo[81204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: [progress INFO root] Writing back 2 completed events
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 19:03:04 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:04 compute-0 ceph-mon[75120]: pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:04 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3844776602' entity='client.admin' 
Jan 20 19:03:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:03:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "mon."} : dispatch
Jan 20 19:03:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "mon", "key": "public_network"} : dispatch
Jan 20 19:03:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:04 compute-0 python3[81206]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:04 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm[80745]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Jan 20 19:03:04 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm[80745]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Jan 20 19:03:04 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm[80745]:   from numpy import show_config as show_numpy_config
Jan 20 19:03:04 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'influx'
Jan 20 19:03:04 compute-0 podman[81207]: 2026-01-20 19:03:04.620825296 +0000 UTC m=+0.039556612 container create 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 19:03:04 compute-0 systemd[1]: Started libpod-conmon-3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2.scope.
Jan 20 19:03:04 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'insights'
Jan 20 19:03:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ef0647be8a103f238932a9ef55f90f5c909144306c27a82dd8ad9ac147756b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ef0647be8a103f238932a9ef55f90f5c909144306c27a82dd8ad9ac147756b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11ef0647be8a103f238932a9ef55f90f5c909144306c27a82dd8ad9ac147756b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:04 compute-0 podman[81207]: 2026-01-20 19:03:04.603946295 +0000 UTC m=+0.022677631 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:04 compute-0 podman[81207]: 2026-01-20 19:03:04.75092273 +0000 UTC m=+0.169654066 container init 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:04 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'iostat'
Jan 20 19:03:04 compute-0 podman[81207]: 2026-01-20 19:03:04.782877077 +0000 UTC m=+0.201608393 container start 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 20 19:03:04 compute-0 podman[81207]: 2026-01-20 19:03:04.786260649 +0000 UTC m=+0.204991965 container attach 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:04 compute-0 podman[81239]: 2026-01-20 19:03:04.816774166 +0000 UTC m=+0.088300626 container create 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:04 compute-0 systemd[1]: Started libpod-conmon-78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b.scope.
Jan 20 19:03:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:04 compute-0 podman[81239]: 2026-01-20 19:03:04.792013415 +0000 UTC m=+0.063539905 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:04 compute-0 podman[81239]: 2026-01-20 19:03:04.894511974 +0000 UTC m=+0.166038464 container init 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:04 compute-0 podman[81239]: 2026-01-20 19:03:04.900737593 +0000 UTC m=+0.172264053 container start 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:04 compute-0 elastic_tharp[81257]: 167 167
Jan 20 19:03:04 compute-0 systemd[1]: libpod-78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b.scope: Deactivated successfully.
Jan 20 19:03:04 compute-0 conmon[81257]: conmon 78414330d95eb7e3faa5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b.scope/container/memory.events
Jan 20 19:03:04 compute-0 podman[81239]: 2026-01-20 19:03:04.9052609 +0000 UTC m=+0.176787390 container attach 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:03:04 compute-0 podman[81239]: 2026-01-20 19:03:04.906224546 +0000 UTC m=+0.177751026 container died 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:04 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'k8sevents'
Jan 20 19:03:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-962001b968761b6260d041ea7165dc1f862254fdebcb384ce388978f7edfede1-merged.mount: Deactivated successfully.
Jan 20 19:03:04 compute-0 podman[81239]: 2026-01-20 19:03:04.947578585 +0000 UTC m=+0.219105045 container remove 78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b (image=quay.io/ceph/ceph:v20, name=elastic_tharp, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 20 19:03:04 compute-0 systemd[1]: libpod-conmon-78414330d95eb7e3faa5fe24c910377cd52e87776e257589157438b28de1b67b.scope: Deactivated successfully.
Jan 20 19:03:05 compute-0 sudo[81156]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:05 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:05 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:05 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.meyjbf (unknown last config time)...
Jan 20 19:03:05 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.meyjbf (unknown last config time)...
Jan 20 19:03:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.meyjbf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Jan 20 19:03:05 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.meyjbf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 20 19:03:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 19:03:05 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr services"} : dispatch
Jan 20 19:03:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:05 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:05 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.meyjbf on compute-0
Jan 20 19:03:05 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.meyjbf on compute-0
Jan 20 19:03:05 compute-0 sudo[81293]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:05 compute-0 sudo[81293]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:05 compute-0 sudo[81293]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:05 compute-0 sudo[81318]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph:v20 --timeout 895 _orch deploy --fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:03:05 compute-0 sudo[81318]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Jan 20 19:03:05 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2697145801' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 20 19:03:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:05 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'localpool'
Jan 20 19:03:05 compute-0 podman[81363]: 2026-01-20 19:03:05.534226518 +0000 UTC m=+0.032007510 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:05 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'mds_autoscaler'
Jan 20 19:03:06 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'mirroring'
Jan 20 19:03:06 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'nfs'
Jan 20 19:03:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Jan 20 19:03:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:03:06 compute-0 ceph-mon[75120]: Reconfiguring mon.compute-0 (unknown last config time)...
Jan 20 19:03:06 compute-0 ceph-mon[75120]: Reconfiguring daemon mon.compute-0 on compute-0
Jan 20 19:03:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mgr.compute-0.meyjbf", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} : dispatch
Jan 20 19:03:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr services"} : dispatch
Jan 20 19:03:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:06 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2697145801' entity='client.admin' cmd={"prefix": "osd set-require-min-compat-client", "version": "mimic"} : dispatch
Jan 20 19:03:06 compute-0 podman[81363]: 2026-01-20 19:03:06.17347561 +0000 UTC m=+0.671256582 container create 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:03:06 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2697145801' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 20 19:03:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Jan 20 19:03:06 compute-0 friendly_murdock[81234]: set require_min_compat_client to mimic
Jan 20 19:03:06 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Jan 20 19:03:06 compute-0 systemd[1]: Started libpod-conmon-90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77.scope.
Jan 20 19:03:06 compute-0 systemd[1]: libpod-3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2.scope: Deactivated successfully.
Jan 20 19:03:06 compute-0 conmon[81234]: conmon 3302e0e7ab64dc38c761 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2.scope/container/memory.events
Jan 20 19:03:06 compute-0 podman[81207]: 2026-01-20 19:03:06.22048177 +0000 UTC m=+1.639213106 container died 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:03:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-11ef0647be8a103f238932a9ef55f90f5c909144306c27a82dd8ad9ac147756b-merged.mount: Deactivated successfully.
Jan 20 19:03:06 compute-0 podman[81363]: 2026-01-20 19:03:06.273015685 +0000 UTC m=+0.770796667 container init 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 19:03:06 compute-0 podman[81207]: 2026-01-20 19:03:06.279691026 +0000 UTC m=+1.698422342 container remove 3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2 (image=quay.io/ceph/ceph:v20, name=friendly_murdock, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:03:06 compute-0 podman[81363]: 2026-01-20 19:03:06.283674438 +0000 UTC m=+0.781455410 container start 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle)
Jan 20 19:03:06 compute-0 podman[81363]: 2026-01-20 19:03:06.28684389 +0000 UTC m=+0.784624882 container attach 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:03:06 compute-0 great_mestorf[81381]: 167 167
Jan 20 19:03:06 compute-0 systemd[1]: libpod-90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77.scope: Deactivated successfully.
Jan 20 19:03:06 compute-0 podman[81363]: 2026-01-20 19:03:06.290489586 +0000 UTC m=+0.788270558 container died 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 20 19:03:06 compute-0 systemd[1]: libpod-conmon-3302e0e7ab64dc38c761f039069f3bf845e8045b4b750dc48c88affc23b500a2.scope: Deactivated successfully.
Jan 20 19:03:06 compute-0 sudo[81204]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-e7afb7e98c7f8ed49957957176e65b5345c4400f368b229e1f24599b93ef6c5c-merged.mount: Deactivated successfully.
Jan 20 19:03:06 compute-0 podman[81363]: 2026-01-20 19:03:06.334448828 +0000 UTC m=+0.832229820 container remove 90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77 (image=quay.io/ceph/ceph:v20, name=great_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:06 compute-0 systemd[1]: libpod-conmon-90e4b3c765c38e0a555b7d736be68272e411a5edf81382f9fef602e887ef4b77.scope: Deactivated successfully.
Jan 20 19:03:06 compute-0 sudo[81318]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:06 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:06 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:06 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:06 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'orchestrator'
Jan 20 19:03:06 compute-0 sudo[81413]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:06 compute-0 sudo[81413]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:06 compute-0 sudo[81413]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:06 compute-0 sudo[81438]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 20 19:03:06 compute-0 sudo[81438]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:06 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'osd_perf_query'
Jan 20 19:03:06 compute-0 sudo[81493]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-puanaoikniazcucdvgimduinixgxlree ; /usr/bin/python3'
Jan 20 19:03:06 compute-0 sudo[81493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:06 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'osd_support'
Jan 20 19:03:06 compute-0 python3[81502]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:06 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'pg_autoscaler'
Jan 20 19:03:06 compute-0 podman[81532]: 2026-01-20 19:03:06.965630452 +0000 UTC m=+0.058839119 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:07 compute-0 podman[81534]: 2026-01-20 19:03:06.950803809 +0000 UTC m=+0.030952949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:07 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'progress'
Jan 20 19:03:07 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'prometheus'
Jan 20 19:03:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:07 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'rbd_support'
Jan 20 19:03:07 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'rgw'
Jan 20 19:03:08 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'rook'
Jan 20 19:03:08 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:08 compute-0 podman[81534]: 2026-01-20 19:03:08.450734378 +0000 UTC m=+1.530883518 container create 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:03:08 compute-0 ceph-mon[75120]: Reconfiguring mgr.compute-0.meyjbf (unknown last config time)...
Jan 20 19:03:08 compute-0 ceph-mon[75120]: Reconfiguring daemon mgr.compute-0.meyjbf on compute-0
Jan 20 19:03:08 compute-0 ceph-mon[75120]: pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:08 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2697145801' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Jan 20 19:03:08 compute-0 ceph-mon[75120]: osdmap e3: 0 total, 0 up, 0 in
Jan 20 19:03:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:08 compute-0 systemd[1]: Started libpod-conmon-0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474.scope.
Jan 20 19:03:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8921e9a173d35b37480c68980dae12f1e0b894dc503b377a46f7c77b73f3cb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8921e9a173d35b37480c68980dae12f1e0b894dc503b377a46f7c77b73f3cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df8921e9a173d35b37480c68980dae12f1e0b894dc503b377a46f7c77b73f3cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:08 compute-0 podman[81532]: 2026-01-20 19:03:08.526308081 +0000 UTC m=+1.619516718 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 19:03:08 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'selftest'
Jan 20 19:03:08 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'smb'
Jan 20 19:03:09 compute-0 podman[81534]: 2026-01-20 19:03:09.032653024 +0000 UTC m=+2.112802204 container init 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:09 compute-0 podman[81534]: 2026-01-20 19:03:09.039670962 +0000 UTC m=+2.119820102 container start 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:09 compute-0 podman[81534]: 2026-01-20 19:03:09.043414742 +0000 UTC m=+2.123563942 container attach 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:03:09 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'snap_schedule'
Jan 20 19:03:09 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'stats'
Jan 20 19:03:09 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'status'
Jan 20 19:03:09 compute-0 sudo[81438]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:09 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'telegraf'
Jan 20 19:03:09 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:09 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:09 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:09 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:09 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:09 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:09 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:03:09 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:03:09 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:03:09 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:09 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'telemetry'
Jan 20 19:03:09 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:03:09 compute-0 sudo[81685]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:03:09 compute-0 sudo[81685]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:09 compute-0 sudo[81685]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:09 compute-0 sudo[81711]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:09 compute-0 sudo[81711]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:09 compute-0 sudo[81711]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:09 compute-0 sudo[81736]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host --expect-hostname compute-0
Jan 20 19:03:09 compute-0 sudo[81736]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:09 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'test_orchestrator'
Jan 20 19:03:09 compute-0 ceph-mon[75120]: pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:09 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:09 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:09 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:09 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:03:09 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:09 compute-0 sudo[81736]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:09 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 19:03:09 compute-0 ceph-mgr[80749]: mgr[py] Loading python module 'volumes'
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: [cephadm INFO root] Added host compute-0
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Added host compute-0
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: [cephadm INFO root] Saving service mon spec with placement compute-0
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Jan 20 19:03:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 19:03:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Jan 20 19:03:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Jan 20 19:03:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev 755fa7d6-748c-4d8d-8256-596ee6d1df92 (Updating mgr deployment (-1 -> 1))
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.fpkyqm from compute-0 -- ports [8765]
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.fpkyqm from compute-0 -- ports [8765]
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 distracted_lehmann[81579]: Added host 'compute-0' with addr '192.168.122.100'
Jan 20 19:03:10 compute-0 distracted_lehmann[81579]: Scheduled mon update...
Jan 20 19:03:10 compute-0 distracted_lehmann[81579]: Scheduled mgr update...
Jan 20 19:03:10 compute-0 distracted_lehmann[81579]: Scheduled osd.default_drive_group update...
Jan 20 19:03:10 compute-0 systemd[1]: libpod-0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474.scope: Deactivated successfully.
Jan 20 19:03:10 compute-0 podman[81534]: 2026-01-20 19:03:10.150229195 +0000 UTC m=+3.230378335 container died 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 19:03:10 compute-0 sudo[81781]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:10 compute-0 sudo[81781]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:10 compute-0 sudo[81781]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:10 compute-0 sudo[81816]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 rm-daemon --fsid 90fff835-31df-513f-a409-b6642f04e6ac --name mgr.compute-0.fpkyqm --force --tcp-ports 8765
Jan 20 19:03:10 compute-0 sudo[81816]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:10 compute-0 ceph-mgr[80749]: ms_deliver_dispatch: unhandled message 0x55653fa32000 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Jan 20 19:03:10 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : Standby manager daemon compute-0.fpkyqm started
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from mgr.compute-0.fpkyqm 192.168.122.100:0/611470075; not ready for session (expect reconnect)
Jan 20 19:03:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-df8921e9a173d35b37480c68980dae12f1e0b894dc503b377a46f7c77b73f3cb-merged.mount: Deactivated successfully.
Jan 20 19:03:10 compute-0 podman[81534]: 2026-01-20 19:03:10.304566592 +0000 UTC m=+3.384715732 container remove 0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474 (image=quay.io/ceph/ceph:v20, name=distracted_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:10 compute-0 systemd[1]: libpod-conmon-0c6371f6fed8c9e97b23915a56c78ea5e60d24cd403b28feacf96b437a386474.scope: Deactivated successfully.
Jan 20 19:03:10 compute-0 sudo[81493]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:10 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:10 compute-0 systemd[1]: Stopping Ceph mgr.compute-0.fpkyqm for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:03:10 compute-0 sudo[81896]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqskcjjeougwljhhdyjirykmqkhzfjbt ; /usr/bin/python3'
Jan 20 19:03:10 compute-0 sudo[81896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='client.14178 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:03:10 compute-0 ceph-mon[75120]: pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: Added host compute-0
Jan 20 19:03:10 compute-0 ceph-mon[75120]: Saving service mon spec with placement compute-0
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: Saving service mgr spec with placement compute-0
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: Marking host: compute-0 for OSDSpec preview refresh.
Jan 20 19:03:10 compute-0 ceph-mon[75120]: Saving service osd.default_drive_group spec with placement compute-0
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: Removing daemon mgr.compute-0.fpkyqm from compute-0 -- ports [8765]
Jan 20 19:03:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:10 compute-0 ceph-mon[75120]: Standby manager daemon compute-0.fpkyqm started
Jan 20 19:03:10 compute-0 python3[81904]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:10 compute-0 podman[81912]: 2026-01-20 19:03:10.796254365 +0000 UTC m=+0.114201992 container died 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:10 compute-0 podman[81928]: 2026-01-20 19:03:10.846300467 +0000 UTC m=+0.059423066 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7eb3b5d27dc6d43ea43c47e99c6297833ccb281e937aa683bf57928bf98e13cd-merged.mount: Deactivated successfully.
Jan 20 19:03:10 compute-0 podman[81928]: 2026-01-20 19:03:10.982338438 +0000 UTC m=+0.195461017 container create 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:11 compute-0 podman[81912]: 2026-01-20 19:03:11.082614616 +0000 UTC m=+0.400562243 container remove 189ff4639020685c49a2a772efc4ae6a313b837fc248990d3a29623287f2b42c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:03:11 compute-0 bash[81912]: ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-fpkyqm
Jan 20 19:03:11 compute-0 systemd[1]: Started libpod-conmon-779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac.scope.
Jan 20 19:03:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdf1080445df14b3a93944e290892acac822e55903c63cb37d8cb496943ec2c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdf1080445df14b3a93944e290892acac822e55903c63cb37d8cb496943ec2c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:11 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecdf1080445df14b3a93944e290892acac822e55903c63cb37d8cb496943ec2c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:11 compute-0 podman[81928]: 2026-01-20 19:03:11.178645394 +0000 UTC m=+0.391768003 container init 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 19:03:11 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.meyjbf(active, since 40s), standbys: compute-0.fpkyqm
Jan 20 19:03:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.fpkyqm", "id": "compute-0.fpkyqm"} v 0)
Jan 20 19:03:11 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr metadata", "who": "compute-0.fpkyqm", "id": "compute-0.fpkyqm"} : dispatch
Jan 20 19:03:11 compute-0 podman[81928]: 2026-01-20 19:03:11.189177715 +0000 UTC m=+0.402300284 container start 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:03:11 compute-0 systemd[1]: ceph-90fff835-31df-513f-a409-b6642f04e6ac@mgr.compute-0.fpkyqm.service: Main process exited, code=exited, status=143/n/a
Jan 20 19:03:11 compute-0 podman[81928]: 2026-01-20 19:03:11.193559449 +0000 UTC m=+0.406682028 container attach 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:03:11 compute-0 systemd[1]: ceph-90fff835-31df-513f-a409-b6642f04e6ac@mgr.compute-0.fpkyqm.service: Failed with result 'exit-code'.
Jan 20 19:03:11 compute-0 systemd[1]: Stopped Ceph mgr.compute-0.fpkyqm for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:03:11 compute-0 systemd[1]: ceph-90fff835-31df-513f-a409-b6642f04e6ac@mgr.compute-0.fpkyqm.service: Consumed 8.887s CPU time, 464.4M memory peak, read 0B from disk, written 832.5K to disk.
Jan 20 19:03:11 compute-0 systemd[1]: Reloading.
Jan 20 19:03:11 compute-0 systemd-rc-local-generator[82030]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:03:11 compute-0 systemd-sysv-generator[82034]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:03:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:11 compute-0 sudo[81816]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:11 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.fpkyqm
Jan 20 19:03:11 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.fpkyqm
Jan 20 19:03:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.fpkyqm"} v 0)
Jan 20 19:03:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.fpkyqm"} : dispatch
Jan 20 19:03:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 19:03:11 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2668631957' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 19:03:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.fpkyqm"}]': finished
Jan 20 19:03:11 compute-0 frosty_kilby[81958]: 
Jan 20 19:03:11 compute-0 frosty_kilby[81958]: {"fsid":"90fff835-31df-513f-a409-b6642f04e6ac","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":64,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-01-20T19:02:04:930609+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-01-20T19:02:04.932596+0000","services":{}},"progress_events":{}}
Jan 20 19:03:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 19:03:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:11 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev 755fa7d6-748c-4d8d-8256-596ee6d1df92 (Updating mgr deployment (-1 -> 1))
Jan 20 19:03:11 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 755fa7d6-748c-4d8d-8256-596ee6d1df92 (Updating mgr deployment (-1 -> 1)) in 2 seconds
Jan 20 19:03:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Jan 20 19:03:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:11 compute-0 systemd[1]: libpod-779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac.scope: Deactivated successfully.
Jan 20 19:03:11 compute-0 conmon[81958]: conmon 779e7814d2a5f361bc26 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac.scope/container/memory.events
Jan 20 19:03:11 compute-0 podman[81928]: 2026-01-20 19:03:11.803159861 +0000 UTC m=+1.016282440 container died 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 19:03:11 compute-0 sudo[82053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:03:11 compute-0 sudo[82053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-ecdf1080445df14b3a93944e290892acac822e55903c63cb37d8cb496943ec2c-merged.mount: Deactivated successfully.
Jan 20 19:03:11 compute-0 sudo[82053]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:11 compute-0 podman[81928]: 2026-01-20 19:03:11.895911 +0000 UTC m=+1.109033579 container remove 779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac (image=quay.io/ceph/ceph:v20, name=frosty_kilby, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:11 compute-0 systemd[1]: libpod-conmon-779e7814d2a5f361bc269d3be9f5abd55d547fe2c105e465ef86c7eb31153fac.scope: Deactivated successfully.
Jan 20 19:03:11 compute-0 sudo[82090]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:11 compute-0 sudo[82090]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:11 compute-0 sudo[82090]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:11 compute-0 sudo[81896]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:11 compute-0 sudo[82115]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 20 19:03:11 compute-0 sudo[82115]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:12 compute-0 ceph-mon[75120]: mgrmap e9: compute-0.meyjbf(active, since 40s), standbys: compute-0.fpkyqm
Jan 20 19:03:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mgr metadata", "who": "compute-0.fpkyqm", "id": "compute-0.fpkyqm"} : dispatch
Jan 20 19:03:12 compute-0 ceph-mon[75120]: pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:12 compute-0 ceph-mon[75120]: Removing key for mgr.compute-0.fpkyqm
Jan 20 19:03:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth rm", "entity": "mgr.compute-0.fpkyqm"} : dispatch
Jan 20 19:03:12 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2668631957' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 19:03:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.fpkyqm"}]': finished
Jan 20 19:03:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:12 compute-0 podman[82183]: 2026-01-20 19:03:12.423419226 +0000 UTC m=+0.074889015 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:12 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:12 compute-0 podman[82183]: 2026-01-20 19:03:12.546692993 +0000 UTC m=+0.198162742 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:12 compute-0 sudo[82115]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:12 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:12 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:12 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:12 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:12 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:03:12 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:03:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:03:12 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:03:12 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:03:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:03:12 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:03:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:12 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:13 compute-0 sudo[82280]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:13 compute-0 sudo[82280]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:13 compute-0 sudo[82280]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:13 compute-0 sudo[82305]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:03:13 compute-0 sudo[82305]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:13 compute-0 podman[82341]: 2026-01-20 19:03:13.302016375 +0000 UTC m=+0.021811760 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:13 compute-0 podman[82341]: 2026-01-20 19:03:13.403542614 +0000 UTC m=+0.123337989 container create 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:13 compute-0 systemd[1]: Started libpod-conmon-2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c.scope.
Jan 20 19:03:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:13 compute-0 podman[82341]: 2026-01-20 19:03:13.620350379 +0000 UTC m=+0.340145824 container init 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:13 compute-0 podman[82341]: 2026-01-20 19:03:13.628647747 +0000 UTC m=+0.348443102 container start 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 19:03:13 compute-0 systemd[1]: libpod-2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c.scope: Deactivated successfully.
Jan 20 19:03:13 compute-0 podman[82341]: 2026-01-20 19:03:13.633103343 +0000 UTC m=+0.352898728 container attach 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:13 compute-0 suspicious_darwin[82357]: 167 167
Jan 20 19:03:13 compute-0 conmon[82357]: conmon 2ec9da98f0bad3fe8418 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c.scope/container/memory.events
Jan 20 19:03:13 compute-0 podman[82341]: 2026-01-20 19:03:13.634540877 +0000 UTC m=+0.354336242 container died 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 19:03:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-41d91164838d8e2e269d7742be9db0a41b54cb5cf7e44756079f6eef0eca6a19-merged.mount: Deactivated successfully.
Jan 20 19:03:13 compute-0 podman[82341]: 2026-01-20 19:03:13.685787607 +0000 UTC m=+0.405582952 container remove 2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:03:13 compute-0 systemd[1]: libpod-conmon-2ec9da98f0bad3fe8418f4d4ff197bae42673a241554ee2b21a8b21706c4a74c.scope: Deactivated successfully.
Jan 20 19:03:13 compute-0 podman[82380]: 2026-01-20 19:03:13.832667316 +0000 UTC m=+0.024382921 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:14 compute-0 podman[82380]: 2026-01-20 19:03:14.065524204 +0000 UTC m=+0.257239799 container create 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 19:03:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:03:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:03:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:03:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:14 compute-0 ceph-mon[75120]: pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:14 compute-0 systemd[1]: Started libpod-conmon-0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960.scope.
Jan 20 19:03:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:14 compute-0 podman[82380]: 2026-01-20 19:03:14.297261184 +0000 UTC m=+0.488976839 container init 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 19:03:14 compute-0 podman[82380]: 2026-01-20 19:03:14.31051688 +0000 UTC m=+0.502232445 container start 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:03:14 compute-0 podman[82380]: 2026-01-20 19:03:14.315520829 +0000 UTC m=+0.507236384 container attach 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:14 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:14 compute-0 ceph-mgr[75417]: [progress INFO root] Writing back 3 completed events
Jan 20 19:03:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 19:03:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:15 compute-0 condescending_blackburn[82396]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:03:15 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:15 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:15 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new ea83dc26-7f71-429f-b9c1-f87c51d6aebb
Jan 20 19:03:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb"} v 0)
Jan 20 19:03:15 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2624241486' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb"} : dispatch
Jan 20 19:03:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Jan 20 19:03:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:03:15 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2624241486' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb"}]': finished
Jan 20 19:03:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Jan 20 19:03:15 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Jan 20 19:03:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:15 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:15 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:15 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Jan 20 19:03:15 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Jan 20 19:03:15 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 19:03:15 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:15 compute-0 lvm[82488]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:03:15 compute-0 lvm[82488]: VG ceph_vg0 finished
Jan 20 19:03:15 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Jan 20 19:03:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 20 19:03:16 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3025274123' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 20 19:03:16 compute-0 condescending_blackburn[82396]:  stderr: got monmap epoch 1
Jan 20 19:03:16 compute-0 condescending_blackburn[82396]: --> Creating keyring file for osd.0
Jan 20 19:03:16 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Jan 20 19:03:16 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Jan 20 19:03:16 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid ea83dc26-7f71-429f-b9c1-f87c51d6aebb --setuser ceph --setgroup ceph
Jan 20 19:03:16 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:16 compute-0 ceph-mon[75120]: pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:16 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2624241486' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb"} : dispatch
Jan 20 19:03:16 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2624241486' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb"}]': finished
Jan 20 19:03:16 compute-0 ceph-mon[75120]: osdmap e4: 1 total, 0 up, 1 in
Jan 20 19:03:16 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:16 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3025274123' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]:  stderr: 2026-01-20T19:03:16.357+0000 7fdacccdf8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]:  stderr: 2026-01-20T19:03:16.381+0000 7fdacccdf8c0 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:17 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new aba2c458-fbc4-4039-bc23-d828faa8f69c
Jan 20 19:03:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:17 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 20 19:03:17 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 19:03:17 compute-0 ceph-mon[75120]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Jan 20 19:03:17 compute-0 ceph-mon[75120]: Cluster is now healthy
Jan 20 19:03:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "aba2c458-fbc4-4039-bc23-d828faa8f69c"} v 0)
Jan 20 19:03:17 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1217177961' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "aba2c458-fbc4-4039-bc23-d828faa8f69c"} : dispatch
Jan 20 19:03:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Jan 20 19:03:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:03:17 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1217177961' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aba2c458-fbc4-4039-bc23-d828faa8f69c"}]': finished
Jan 20 19:03:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Jan 20 19:03:17 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Jan 20 19:03:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:17 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:17 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:17 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:17 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:18 compute-0 lvm[83424]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:03:18 compute-0 lvm[83424]: VG ceph_vg1 finished
Jan 20 19:03:18 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Jan 20 19:03:18 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Jan 20 19:03:18 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 20 19:03:18 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:18 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Jan 20 19:03:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:18 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 20 19:03:18 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3401551903' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 20 19:03:18 compute-0 condescending_blackburn[82396]:  stderr: got monmap epoch 1
Jan 20 19:03:18 compute-0 condescending_blackburn[82396]: --> Creating keyring file for osd.1
Jan 20 19:03:18 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Jan 20 19:03:18 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Jan 20 19:03:18 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid aba2c458-fbc4-4039-bc23-d828faa8f69c --setuser ceph --setgroup ceph
Jan 20 19:03:18 compute-0 ceph-mon[75120]: pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:18 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1217177961' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "aba2c458-fbc4-4039-bc23-d828faa8f69c"} : dispatch
Jan 20 19:03:18 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1217177961' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "aba2c458-fbc4-4039-bc23-d828faa8f69c"}]': finished
Jan 20 19:03:18 compute-0 ceph-mon[75120]: osdmap e5: 2 total, 0 up, 2 in
Jan 20 19:03:18 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:18 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:18 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3401551903' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 20 19:03:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:19 compute-0 ceph-mon[75120]: pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:19 compute-0 condescending_blackburn[82396]:  stderr: 2026-01-20T19:03:18.829+0000 7fb2b71d58c0 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) No valid bdev label found
Jan 20 19:03:19 compute-0 condescending_blackburn[82396]:  stderr: 2026-01-20T19:03:18.856+0000 7fb2b71d58c0 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Jan 20 19:03:19 compute-0 condescending_blackburn[82396]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new f12cccca-abeb-4720-98f5-dcecf6096427
Jan 20 19:03:20 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "f12cccca-abeb-4720-98f5-dcecf6096427"} v 0)
Jan 20 19:03:20 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3657180307' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f12cccca-abeb-4720-98f5-dcecf6096427"} : dispatch
Jan 20 19:03:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Jan 20 19:03:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:03:20 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3657180307' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f12cccca-abeb-4720-98f5-dcecf6096427"}]': finished
Jan 20 19:03:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Jan 20 19:03:20 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Jan 20 19:03:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:20 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:20 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:20 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:20 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:20 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:20 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:20 compute-0 lvm[84362]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:03:20 compute-0 lvm[84362]: VG ceph_vg2 finished
Jan 20 19:03:20 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3657180307' entity='client.bootstrap-osd' cmd={"prefix": "osd new", "uuid": "f12cccca-abeb-4720-98f5-dcecf6096427"} : dispatch
Jan 20 19:03:20 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3657180307' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "f12cccca-abeb-4720-98f5-dcecf6096427"}]': finished
Jan 20 19:03:20 compute-0 ceph-mon[75120]: osdmap e6: 3 total, 0 up, 3 in
Jan 20 19:03:20 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:20 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:20 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:20 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Jan 20 19:03:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Jan 20 19:03:21 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1513095021' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 20 19:03:21 compute-0 condescending_blackburn[82396]:  stderr: got monmap epoch 1
Jan 20 19:03:21 compute-0 condescending_blackburn[82396]: --> Creating keyring file for osd.2
Jan 20 19:03:21 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Jan 20 19:03:21 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Jan 20 19:03:21 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid f12cccca-abeb-4720-98f5-dcecf6096427 --setuser ceph --setgroup ceph
Jan 20 19:03:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:21 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1513095021' entity='client.bootstrap-osd' cmd={"prefix": "mon getmap"} : dispatch
Jan 20 19:03:21 compute-0 ceph-mon[75120]: pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:22 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:23 compute-0 condescending_blackburn[82396]:  stderr: 2026-01-20T19:03:21.743+0000 7f90a125c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) No valid bdev label found
Jan 20 19:03:23 compute-0 condescending_blackburn[82396]:  stderr: 2026-01-20T19:03:21.769+0000 7f90a125c8c0 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Jan 20 19:03:23 compute-0 condescending_blackburn[82396]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Jan 20 19:03:23 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 19:03:23 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 20 19:03:23 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:23 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:23 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 20 19:03:23 compute-0 condescending_blackburn[82396]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 19:03:23 compute-0 condescending_blackburn[82396]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 20 19:03:23 compute-0 condescending_blackburn[82396]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Jan 20 19:03:23 compute-0 systemd[1]: libpod-0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960.scope: Deactivated successfully.
Jan 20 19:03:23 compute-0 systemd[1]: libpod-0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960.scope: Consumed 6.115s CPU time.
Jan 20 19:03:23 compute-0 ceph-mon[75120]: pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:23 compute-0 podman[85275]: 2026-01-20 19:03:23.814185221 +0000 UTC m=+0.033674743 container died 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:03:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-847f4d6702ca8481e19ff2dc1fdf4673dc737d0633bb720e7520c2baf4827ca8-merged.mount: Deactivated successfully.
Jan 20 19:03:23 compute-0 podman[85275]: 2026-01-20 19:03:23.860589896 +0000 UTC m=+0.080079308 container remove 0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 19:03:23 compute-0 systemd[1]: libpod-conmon-0f87627db66379061d19ecbbc4b633546ad08eec8fe99b2ffe57da60f49a9960.scope: Deactivated successfully.
Jan 20 19:03:23 compute-0 sudo[82305]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:24 compute-0 sudo[85291]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:24 compute-0 sudo[85291]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:24 compute-0 sudo[85291]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:24 compute-0 sudo[85316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:03:24 compute-0 sudo[85316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:24 compute-0 podman[85353]: 2026-01-20 19:03:24.323637806 +0000 UTC m=+0.020146220 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:24 compute-0 podman[85353]: 2026-01-20 19:03:24.43964534 +0000 UTC m=+0.136153734 container create 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:24 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:24 compute-0 systemd[1]: Started libpod-conmon-0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1.scope.
Jan 20 19:03:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:24 compute-0 podman[85353]: 2026-01-20 19:03:24.686076979 +0000 UTC m=+0.382585393 container init 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:03:24 compute-0 podman[85353]: 2026-01-20 19:03:24.693625549 +0000 UTC m=+0.390133943 container start 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:24 compute-0 podman[85353]: 2026-01-20 19:03:24.697134873 +0000 UTC m=+0.393643267 container attach 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:03:24 compute-0 musing_ardinghelli[85370]: 167 167
Jan 20 19:03:24 compute-0 systemd[1]: libpod-0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1.scope: Deactivated successfully.
Jan 20 19:03:24 compute-0 podman[85353]: 2026-01-20 19:03:24.699435158 +0000 UTC m=+0.395943552 container died 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-3c7e31ecc7fe9322495a536cbe225bf310f197c59ba4531a404162273a8dfff1-merged.mount: Deactivated successfully.
Jan 20 19:03:25 compute-0 podman[85353]: 2026-01-20 19:03:25.208424322 +0000 UTC m=+0.904932716 container remove 0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_ardinghelli, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:25 compute-0 systemd[1]: libpod-conmon-0c20d1431963bb9a3fc75468688f6be7021cb0cdef95a7caea6d6206644345f1.scope: Deactivated successfully.
Jan 20 19:03:25 compute-0 podman[85394]: 2026-01-20 19:03:25.354194605 +0000 UTC m=+0.027424094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:25 compute-0 podman[85394]: 2026-01-20 19:03:25.771326981 +0000 UTC m=+0.444556430 container create cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:03:25 compute-0 systemd[1]: Started libpod-conmon-cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c.scope.
Jan 20 19:03:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ada9cbd9c0db51dd2449e0edc47b65dbb5f6da8fa76a1354b8d7061f9d29ddc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ada9cbd9c0db51dd2449e0edc47b65dbb5f6da8fa76a1354b8d7061f9d29ddc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ada9cbd9c0db51dd2449e0edc47b65dbb5f6da8fa76a1354b8d7061f9d29ddc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ada9cbd9c0db51dd2449e0edc47b65dbb5f6da8fa76a1354b8d7061f9d29ddc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:25 compute-0 ceph-mon[75120]: pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:25 compute-0 podman[85394]: 2026-01-20 19:03:25.877707205 +0000 UTC m=+0.550936664 container init cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:25 compute-0 podman[85394]: 2026-01-20 19:03:25.885506371 +0000 UTC m=+0.558735810 container start cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 20 19:03:25 compute-0 podman[85394]: 2026-01-20 19:03:25.889730912 +0000 UTC m=+0.562960361 container attach cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:03:26 compute-0 beautiful_noether[85411]: {
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:     "0": [
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:         {
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "devices": [
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "/dev/loop3"
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             ],
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_name": "ceph_lv0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_size": "21470642176",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "name": "ceph_lv0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "tags": {
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.cluster_name": "ceph",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.crush_device_class": "",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.encrypted": "0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.objectstore": "bluestore",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.osd_id": "0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.type": "block",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.vdo": "0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.with_tpm": "0"
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             },
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "type": "block",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "vg_name": "ceph_vg0"
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:         }
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:     ],
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:     "1": [
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:         {
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "devices": [
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "/dev/loop4"
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             ],
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_name": "ceph_lv1",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_size": "21470642176",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "name": "ceph_lv1",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "tags": {
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.cluster_name": "ceph",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.crush_device_class": "",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.encrypted": "0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.objectstore": "bluestore",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.osd_id": "1",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.type": "block",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.vdo": "0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.with_tpm": "0"
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             },
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "type": "block",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "vg_name": "ceph_vg1"
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:         }
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:     ],
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:     "2": [
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:         {
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "devices": [
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "/dev/loop5"
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             ],
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_name": "ceph_lv2",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_size": "21470642176",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "name": "ceph_lv2",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "tags": {
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.cluster_name": "ceph",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.crush_device_class": "",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.encrypted": "0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.objectstore": "bluestore",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.osd_id": "2",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.type": "block",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.vdo": "0",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:                 "ceph.with_tpm": "0"
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             },
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "type": "block",
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:             "vg_name": "ceph_vg2"
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:         }
Jan 20 19:03:26 compute-0 beautiful_noether[85411]:     ]
Jan 20 19:03:26 compute-0 beautiful_noether[85411]: }
Jan 20 19:03:26 compute-0 systemd[1]: libpod-cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c.scope: Deactivated successfully.
Jan 20 19:03:26 compute-0 podman[85394]: 2026-01-20 19:03:26.183629353 +0000 UTC m=+0.856858802 container died cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:03:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ada9cbd9c0db51dd2449e0edc47b65dbb5f6da8fa76a1354b8d7061f9d29ddc-merged.mount: Deactivated successfully.
Jan 20 19:03:26 compute-0 podman[85394]: 2026-01-20 19:03:26.232822874 +0000 UTC m=+0.906052333 container remove cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=beautiful_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:03:26 compute-0 systemd[1]: libpod-conmon-cb29dc36d2251f41a8a0b58b388ed43b664f75602641c03bd8e3b52b0ad8cb4c.scope: Deactivated successfully.
Jan 20 19:03:26 compute-0 sudo[85316]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:26 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Jan 20 19:03:26 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 20 19:03:26 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:26 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:26 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Jan 20 19:03:26 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Jan 20 19:03:26 compute-0 sudo[85431]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:26 compute-0 sudo[85431]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:26 compute-0 sudo[85431]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:26 compute-0 sudo[85456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:03:26 compute-0 sudo[85456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:26 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:26 compute-0 podman[85521]: 2026-01-20 19:03:26.780890279 +0000 UTC m=+0.022642540 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:26 compute-0 podman[85521]: 2026-01-20 19:03:26.969524513 +0000 UTC m=+0.211276694 container create c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:03:26 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.0"} : dispatch
Jan 20 19:03:26 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:26 compute-0 ceph-mon[75120]: Deploying daemon osd.0 on compute-0
Jan 20 19:03:27 compute-0 systemd[1]: Started libpod-conmon-c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a.scope.
Jan 20 19:03:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:27 compute-0 podman[85521]: 2026-01-20 19:03:27.083970329 +0000 UTC m=+0.325722500 container init c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 19:03:27 compute-0 podman[85521]: 2026-01-20 19:03:27.092475342 +0000 UTC m=+0.334227553 container start c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:03:27 compute-0 kind_faraday[85537]: 167 167
Jan 20 19:03:27 compute-0 systemd[1]: libpod-c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a.scope: Deactivated successfully.
Jan 20 19:03:27 compute-0 conmon[85537]: conmon c368b7601c1575da914b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a.scope/container/memory.events
Jan 20 19:03:27 compute-0 podman[85521]: 2026-01-20 19:03:27.108577035 +0000 UTC m=+0.350329306 container attach c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:27 compute-0 podman[85521]: 2026-01-20 19:03:27.109574809 +0000 UTC m=+0.351327000 container died c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 20 19:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-414c3733188c5fd091018392748bea4279236703ccb1f57eea07d10045f47f40-merged.mount: Deactivated successfully.
Jan 20 19:03:27 compute-0 podman[85521]: 2026-01-20 19:03:27.19945991 +0000 UTC m=+0.441212101 container remove c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 19:03:27 compute-0 systemd[1]: libpod-conmon-c368b7601c1575da914be438ff26011e6fa79bfeff8f079e396d211053dc043a.scope: Deactivated successfully.
Jan 20 19:03:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:27 compute-0 podman[85567]: 2026-01-20 19:03:27.466346227 +0000 UTC m=+0.028423208 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:27 compute-0 podman[85567]: 2026-01-20 19:03:27.530989217 +0000 UTC m=+0.093066158 container create b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:27 compute-0 systemd[1]: Started libpod-conmon-b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb.scope.
Jan 20 19:03:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:27 compute-0 podman[85567]: 2026-01-20 19:03:27.620267074 +0000 UTC m=+0.182344045 container init b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:27 compute-0 podman[85567]: 2026-01-20 19:03:27.633167011 +0000 UTC m=+0.195243952 container start b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:03:27 compute-0 podman[85567]: 2026-01-20 19:03:27.661307311 +0000 UTC m=+0.223384282 container attach b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 19:03:27 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test[85583]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 20 19:03:27 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test[85583]:                             [--no-systemd] [--no-tmpfs]
Jan 20 19:03:27 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test[85583]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 20 19:03:27 compute-0 systemd[1]: libpod-b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb.scope: Deactivated successfully.
Jan 20 19:03:27 compute-0 podman[85567]: 2026-01-20 19:03:27.829755163 +0000 UTC m=+0.391832094 container died b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True)
Jan 20 19:03:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-3cd54683150588e81ba1f34dbe6857f5167211497eabc141353d0f99452e7d37-merged.mount: Deactivated successfully.
Jan 20 19:03:27 compute-0 ceph-mon[75120]: pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:28 compute-0 podman[85567]: 2026-01-20 19:03:28.043848573 +0000 UTC m=+0.605925514 container remove b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate-test, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:28 compute-0 systemd[1]: libpod-conmon-b72a69593fb13dd6419a4d3683870563a021c89e563dfa33a96ed59f8b1280eb.scope: Deactivated successfully.
Jan 20 19:03:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:28 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:28 compute-0 systemd[1]: Reloading.
Jan 20 19:03:28 compute-0 systemd-sysv-generator[85649]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:03:28 compute-0 systemd-rc-local-generator[85645]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:03:28 compute-0 systemd[1]: Reloading.
Jan 20 19:03:28 compute-0 systemd-rc-local-generator[85685]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:03:28 compute-0 systemd-sysv-generator[85688]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:03:29 compute-0 systemd[1]: Starting Ceph osd.0 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:03:29 compute-0 podman[85744]: 2026-01-20 19:03:29.439505629 +0000 UTC m=+0.057534442 container create 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:29 compute-0 podman[85744]: 2026-01-20 19:03:29.405433317 +0000 UTC m=+0.023462180 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:29 compute-0 podman[85744]: 2026-01-20 19:03:29.556506776 +0000 UTC m=+0.174535609 container init 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:03:29 compute-0 podman[85744]: 2026-01-20 19:03:29.571971004 +0000 UTC m=+0.189999857 container start 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:29 compute-0 podman[85744]: 2026-01-20 19:03:29.576451181 +0000 UTC m=+0.194480024 container attach 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:29 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:29 compute-0 bash[85744]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:29 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:29 compute-0 bash[85744]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:30 compute-0 lvm[85844]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:03:30 compute-0 lvm[85844]: VG ceph_vg0 finished
Jan 20 19:03:30 compute-0 lvm[85847]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:03:30 compute-0 lvm[85847]: VG ceph_vg1 finished
Jan 20 19:03:30 compute-0 lvm[85849]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:03:30 compute-0 lvm[85849]: VG ceph_vg2 finished
Jan 20 19:03:30 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 19:03:30 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:30 compute-0 bash[85744]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 19:03:30 compute-0 bash[85744]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:30 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:30 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:30 compute-0 bash[85744]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:30 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 19:03:30 compute-0 bash[85744]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 19:03:30 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 20 19:03:30 compute-0 bash[85744]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Jan 20 19:03:30 compute-0 ceph-mon[75120]: pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:30 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:30 compute-0 bash[85744]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:30 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:30 compute-0 bash[85744]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:30 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 19:03:30 compute-0 bash[85744]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Jan 20 19:03:30 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 19:03:30 compute-0 bash[85744]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Jan 20 19:03:30 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate[85760]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 20 19:03:30 compute-0 bash[85744]: --> ceph-volume lvm activate successful for osd ID: 0
Jan 20 19:03:30 compute-0 systemd[1]: libpod-2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51.scope: Deactivated successfully.
Jan 20 19:03:30 compute-0 podman[85744]: 2026-01-20 19:03:30.674838775 +0000 UTC m=+1.292867588 container died 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:30 compute-0 systemd[1]: libpod-2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51.scope: Consumed 1.567s CPU time.
Jan 20 19:03:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0b649009381b7c89d830c47136ee43eb39991aabd5b6a616c898e724b3d42e7-merged.mount: Deactivated successfully.
Jan 20 19:03:30 compute-0 podman[85744]: 2026-01-20 19:03:30.716136629 +0000 UTC m=+1.334165442 container remove 2d18199a048d63582816e13ede9ac65845a43a44c1f430156d20bbbc0162ae51 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0-activate, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:30 compute-0 podman[86002]: 2026-01-20 19:03:30.928753784 +0000 UTC m=+0.039331429 container create eabc59bf78c29281caec780e2f63d21f2c1631016579501e797d66320f85da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0ae021064ebbf93c693f40265c567a70e199fdba77b42383da98125b0de47a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0ae021064ebbf93c693f40265c567a70e199fdba77b42383da98125b0de47a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0ae021064ebbf93c693f40265c567a70e199fdba77b42383da98125b0de47a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0ae021064ebbf93c693f40265c567a70e199fdba77b42383da98125b0de47a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd0ae021064ebbf93c693f40265c567a70e199fdba77b42383da98125b0de47a/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:30 compute-0 podman[86002]: 2026-01-20 19:03:30.983868736 +0000 UTC m=+0.094446401 container init eabc59bf78c29281caec780e2f63d21f2c1631016579501e797d66320f85da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:30 compute-0 podman[86002]: 2026-01-20 19:03:30.994183232 +0000 UTC m=+0.104760877 container start eabc59bf78c29281caec780e2f63d21f2c1631016579501e797d66320f85da8d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 19:03:30 compute-0 bash[86002]: eabc59bf78c29281caec780e2f63d21f2c1631016579501e797d66320f85da8d
Jan 20 19:03:30 compute-0 podman[86002]: 2026-01-20 19:03:30.911279357 +0000 UTC m=+0.021857032 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:31 compute-0 systemd[1]: Started Ceph osd.0 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:03:31 compute-0 ceph-osd[86022]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: pidfile_write: ignore empty --pid-file
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 sudo[85456]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:31 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Jan 20 19:03:31 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 20 19:03:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:31 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:31 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Jan 20 19:03:31 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 sudo[86036]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:31 compute-0 sudo[86036]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 sudo[86036]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2400 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2400 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 sudo[86067]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a2000 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 sudo[86067]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Jan 20 19:03:31 compute-0 ceph-osd[86022]: load: jerasure load: lrc 
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 ceph-osd[86022]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 20 19:03:31 compute-0 ceph-osd[86022]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x5614277a3c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount shared_bdev_used = 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: RocksDB version: 7.9.2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Git sha 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: DB SUMMARY
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: DB Session ID:  2LYYGZSRKWTX2JVYO344
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: CURRENT file:  CURRENT
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                         Options.error_if_exists: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.create_if_missing: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                                     Options.env: 0x561427633ea0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                                Options.info_log: 0x5614286848a0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                              Options.statistics: (nil)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.use_fsync: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                              Options.db_log_dir: 
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.write_buffer_manager: 0x561427698b40
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.unordered_write: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.row_cache: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                              Options.wal_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.two_write_queues: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.wal_compression: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.atomic_flush: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.max_background_jobs: 4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.max_background_compactions: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.max_subcompactions: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.max_open_files: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Compression algorithms supported:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kZSTD supported: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kXpressCompression supported: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kBZip2Compression supported: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kLZ4Compression supported: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kZlibCompression supported: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kSnappyCompression supported: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614276378d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614276378d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614276378d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614276378d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614276378d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614276378d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614276378d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561427637a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561427637a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428684c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561427637a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d9c11cee-4e1e-4d55-b52b-c650acf03792
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811464469, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811466545, "job": 1, "event": "recovery_finished"}
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: freelist init
Jan 20 19:03:31 compute-0 ceph-osd[86022]: freelist _read_cfg
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs umount
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) close
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bdev(0x561428439800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluefs mount shared_bdev_used = 27262976
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: RocksDB version: 7.9.2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Git sha 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: DB SUMMARY
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: DB Session ID:  2LYYGZSRKWTX2JVYO345
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: CURRENT file:  CURRENT
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                         Options.error_if_exists: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.create_if_missing: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                                     Options.env: 0x561427633ce0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                                Options.info_log: 0x5614287112a0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                              Options.statistics: (nil)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.use_fsync: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                              Options.db_log_dir: 
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.write_buffer_manager: 0x561427699900
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.unordered_write: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.row_cache: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                              Options.wal_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.two_write_queues: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.wal_compression: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.atomic_flush: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.max_background_jobs: 4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.max_background_compactions: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.max_subcompactions: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.max_open_files: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Compression algorithms supported:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kZSTD supported: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kXpressCompression supported: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kBZip2Compression supported: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kLZ4Compression supported: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kZlibCompression supported: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         kSnappyCompression supported: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561427637a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561427637a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561427637a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561427637a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561427637a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561427637a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ce0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x561427637a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ee0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614276374b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ee0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614276374b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x561428685ee0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614276374b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d9c11cee-4e1e-4d55-b52b-c650acf03792
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811505434, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811509961, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935811, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9c11cee-4e1e-4d55-b52b-c650acf03792", "db_session_id": "2LYYGZSRKWTX2JVYO345", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:03:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:03:31
Jan 20 19:03:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:03:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:03:31 compute-0 ceph-mgr[75417]: [balancer INFO root] No pools available
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811513957, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935811, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9c11cee-4e1e-4d55-b52b-c650acf03792", "db_session_id": "2LYYGZSRKWTX2JVYO345", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811517685, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935811, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d9c11cee-4e1e-4d55-b52b-c650acf03792", "db_session_id": "2LYYGZSRKWTX2JVYO345", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935811519142, "job": 1, "event": "recovery_finished"}
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56142888dc00
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: DB pointer 0x56142883e000
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Jan 20 19:03:31 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:03:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 7e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.0 total, 0.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 460.80 MB usage: 0.94 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(2,0.72 KB,0.000152323%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:03:31 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 20 19:03:31 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 20 19:03:31 compute-0 ceph-osd[86022]: _get_class not permitted to load lua
Jan 20 19:03:31 compute-0 ceph-osd[86022]: _get_class not permitted to load sdk
Jan 20 19:03:31 compute-0 ceph-osd[86022]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 20 19:03:31 compute-0 ceph-osd[86022]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 20 19:03:31 compute-0 ceph-osd[86022]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 20 19:03:31 compute-0 ceph-osd[86022]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 20 19:03:31 compute-0 ceph-osd[86022]: osd.0 0 load_pgs
Jan 20 19:03:31 compute-0 ceph-osd[86022]: osd.0 0 load_pgs opened 0 pgs
Jan 20 19:03:31 compute-0 ceph-osd[86022]: osd.0 0 log_to_monitors true
Jan 20 19:03:31 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0[86018]: 2026-01-20T19:03:31.544+0000 7f1d61cfc8c0 -1 osd.0 0 log_to_monitors true
Jan 20 19:03:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Jan 20 19:03:31 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 20 19:03:31 compute-0 podman[86533]: 2026-01-20 19:03:31.581267847 +0000 UTC m=+0.040239790 container create 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3)
Jan 20 19:03:31 compute-0 systemd[1]: Started libpod-conmon-3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541.scope.
Jan 20 19:03:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:31 compute-0 podman[86533]: 2026-01-20 19:03:31.562805746 +0000 UTC m=+0.021777679 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:31 compute-0 podman[86533]: 2026-01-20 19:03:31.663441924 +0000 UTC m=+0.122413907 container init 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:03:31 compute-0 podman[86533]: 2026-01-20 19:03:31.670332198 +0000 UTC m=+0.129304141 container start 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:31 compute-0 podman[86533]: 2026-01-20 19:03:31.67377642 +0000 UTC m=+0.132748373 container attach 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:31 compute-0 charming_almeida[86582]: 167 167
Jan 20 19:03:31 compute-0 systemd[1]: libpod-3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541.scope: Deactivated successfully.
Jan 20 19:03:31 compute-0 conmon[86582]: conmon 3fea38f4e66fd6846bd3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541.scope/container/memory.events
Jan 20 19:03:31 compute-0 podman[86533]: 2026-01-20 19:03:31.677090799 +0000 UTC m=+0.136062762 container died 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-dbe96ada11116fce507484985b464bb155f669caf4050d9072b6084a80d3a4b1-merged.mount: Deactivated successfully.
Jan 20 19:03:31 compute-0 podman[86533]: 2026-01-20 19:03:31.714711955 +0000 UTC m=+0.173683878 container remove 3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_almeida, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:31 compute-0 systemd[1]: libpod-conmon-3fea38f4e66fd6846bd3a7dad6866eefb12307889c11bd077becc1dd2a61e541.scope: Deactivated successfully.
Jan 20 19:03:31 compute-0 podman[86612]: 2026-01-20 19:03:31.98150188 +0000 UTC m=+0.061162127 container create 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 19:03:32 compute-0 systemd[1]: Started libpod-conmon-19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31.scope.
Jan 20 19:03:32 compute-0 podman[86612]: 2026-01-20 19:03:31.960128271 +0000 UTC m=+0.039788538 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:32 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:32 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.1"} : dispatch
Jan 20 19:03:32 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:32 compute-0 ceph-mon[75120]: Deploying daemon osd.1 on compute-0
Jan 20 19:03:32 compute-0 ceph-mon[75120]: pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:32 compute-0 ceph-mon[75120]: from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} : dispatch
Jan 20 19:03:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Jan 20 19:03:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:03:32 compute-0 podman[86612]: 2026-01-20 19:03:32.093189111 +0000 UTC m=+0.172849368 container init 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:32 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 20 19:03:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Jan 20 19:03:32 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Jan 20 19:03:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:32 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 20 19:03:32 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 19:03:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 20 19:03:32 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:32 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:32 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:32 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:32 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:32 compute-0 podman[86612]: 2026-01-20 19:03:32.106711593 +0000 UTC m=+0.186371860 container start 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:32 compute-0 podman[86612]: 2026-01-20 19:03:32.111674421 +0000 UTC m=+0.191334668 container attach 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 19:03:32 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test[86628]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 20 19:03:32 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test[86628]:                             [--no-systemd] [--no-tmpfs]
Jan 20 19:03:32 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test[86628]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 20 19:03:32 compute-0 systemd[1]: libpod-19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31.scope: Deactivated successfully.
Jan 20 19:03:32 compute-0 podman[86612]: 2026-01-20 19:03:32.330510984 +0000 UTC m=+0.410171221 container died 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d1541753737d7f915f98648f4d711f5c9f4844419edd9cc25ae59cb1d01365f-merged.mount: Deactivated successfully.
Jan 20 19:03:32 compute-0 podman[86612]: 2026-01-20 19:03:32.380760011 +0000 UTC m=+0.460420248 container remove 19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 19:03:32 compute-0 systemd[1]: libpod-conmon-19bbeb4afe2d49dd5db0daf9ec1b058d7382a3d225f795fa788a5f4d4004fc31.scope: Deactivated successfully.
Jan 20 19:03:32 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:32 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 20 19:03:32 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 20 19:03:32 compute-0 systemd[1]: Reloading.
Jan 20 19:03:32 compute-0 systemd-sysv-generator[86695]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:03:32 compute-0 systemd-rc-local-generator[86690]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:03:32 compute-0 systemd[1]: Reloading.
Jan 20 19:03:33 compute-0 systemd-rc-local-generator[86729]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:03:33 compute-0 systemd-sysv-generator[86734]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:03:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Jan 20 19:03:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:03:33 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 19:03:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Jan 20 19:03:33 compute-0 ceph-osd[86022]: osd.0 0 done with init, starting boot process
Jan 20 19:03:33 compute-0 ceph-osd[86022]: osd.0 0 start_boot
Jan 20 19:03:33 compute-0 ceph-osd[86022]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 20 19:03:33 compute-0 ceph-osd[86022]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 20 19:03:33 compute-0 ceph-osd[86022]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 20 19:03:33 compute-0 ceph-osd[86022]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 20 19:03:33 compute-0 ceph-osd[86022]: osd.0 0  bench count 12288000 bsize 4 KiB
Jan 20 19:03:33 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Jan 20 19:03:33 compute-0 ceph-mon[75120]: from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Jan 20 19:03:33 compute-0 ceph-mon[75120]: osdmap e7: 3 total, 0 up, 3 in
Jan 20 19:03:33 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:33 compute-0 ceph-mon[75120]: from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd={"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 19:03:33 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:33 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:33 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:33 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:33 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:33 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:33 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:33 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:33 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 19:03:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:33 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:33 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:33 compute-0 systemd[1]: Starting Ceph osd.1 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:03:33 compute-0 podman[86789]: 2026-01-20 19:03:33.417248911 +0000 UTC m=+0.040696731 container create d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:03:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e8 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:33 compute-0 podman[86789]: 2026-01-20 19:03:33.401474675 +0000 UTC m=+0.024922545 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:33 compute-0 podman[86789]: 2026-01-20 19:03:33.522348834 +0000 UTC m=+0.145796684 container init d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:33 compute-0 podman[86789]: 2026-01-20 19:03:33.527384074 +0000 UTC m=+0.150831894 container start d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:33 compute-0 podman[86789]: 2026-01-20 19:03:33.547884012 +0000 UTC m=+0.171331832 container attach d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:03:33 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:33 compute-0 bash[86789]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:33 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:33 compute-0 bash[86789]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 19:03:34 compute-0 ceph-mon[75120]: from='osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 19:03:34 compute-0 ceph-mon[75120]: osdmap e8: 3 total, 0 up, 3 in
Jan 20 19:03:34 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:34 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:34 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:34 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:34 compute-0 ceph-mon[75120]: pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:34 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:34 compute-0 lvm[86890]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:03:34 compute-0 lvm[86890]: VG ceph_vg1 finished
Jan 20 19:03:34 compute-0 lvm[86889]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:03:34 compute-0 lvm[86889]: VG ceph_vg0 finished
Jan 20 19:03:34 compute-0 lvm[86892]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:03:34 compute-0 lvm[86892]: VG ceph_vg2 finished
Jan 20 19:03:34 compute-0 lvm[86893]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:03:34 compute-0 lvm[86893]: VG ceph_vg0 finished
Jan 20 19:03:34 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 19:03:34 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:34 compute-0 bash[86789]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 19:03:34 compute-0 bash[86789]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:34 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:34 compute-0 bash[86789]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:03:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:03:34 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 19:03:34 compute-0 bash[86789]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 19:03:34 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 20 19:03:34 compute-0 bash[86789]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Jan 20 19:03:34 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:34 compute-0 bash[86789]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:34 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:34 compute-0 bash[86789]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:34 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 20 19:03:34 compute-0 bash[86789]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Jan 20 19:03:34 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 19:03:34 compute-0 bash[86789]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Jan 20 19:03:34 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate[86804]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 20 19:03:34 compute-0 bash[86789]: --> ceph-volume lvm activate successful for osd ID: 1
Jan 20 19:03:34 compute-0 systemd[1]: libpod-d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb.scope: Deactivated successfully.
Jan 20 19:03:34 compute-0 systemd[1]: libpod-d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb.scope: Consumed 1.547s CPU time.
Jan 20 19:03:34 compute-0 podman[86989]: 2026-01-20 19:03:34.662209866 +0000 UTC m=+0.025217522 container died d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 19:03:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-adb3eb22ae615101be07b8bf03828b568dd78268b86437740945116deb3cf652-merged.mount: Deactivated successfully.
Jan 20 19:03:34 compute-0 podman[86989]: 2026-01-20 19:03:34.879307817 +0000 UTC m=+0.242315473 container remove d07f8262fbb4a44d7004543c7e31992546a36d037cf16c0966f9c3d954defdcb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1-activate, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:03:35 compute-0 podman[87051]: 2026-01-20 19:03:35.08262245 +0000 UTC m=+0.038721302 container create bfb3a392dadbfba129a0ec858cdb44a48baac2ff8e51790a73dd61828541b643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:35 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 19:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7094eb738495fead110abbc7629a6438fa8d35f6d2b2f6ae1fdaf2ffdb08080f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7094eb738495fead110abbc7629a6438fa8d35f6d2b2f6ae1fdaf2ffdb08080f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7094eb738495fead110abbc7629a6438fa8d35f6d2b2f6ae1fdaf2ffdb08080f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7094eb738495fead110abbc7629a6438fa8d35f6d2b2f6ae1fdaf2ffdb08080f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7094eb738495fead110abbc7629a6438fa8d35f6d2b2f6ae1fdaf2ffdb08080f/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:35 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:35 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:35 compute-0 ceph-mon[75120]: purged_snaps scrub starts
Jan 20 19:03:35 compute-0 ceph-mon[75120]: purged_snaps scrub ok
Jan 20 19:03:35 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:35 compute-0 podman[87051]: 2026-01-20 19:03:35.067719286 +0000 UTC m=+0.023818168 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:35 compute-0 podman[87051]: 2026-01-20 19:03:35.190690655 +0000 UTC m=+0.146789597 container init bfb3a392dadbfba129a0ec858cdb44a48baac2ff8e51790a73dd61828541b643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:03:35 compute-0 podman[87051]: 2026-01-20 19:03:35.20052904 +0000 UTC m=+0.156627932 container start bfb3a392dadbfba129a0ec858cdb44a48baac2ff8e51790a73dd61828541b643 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 20 19:03:35 compute-0 bash[87051]: bfb3a392dadbfba129a0ec858cdb44a48baac2ff8e51790a73dd61828541b643
Jan 20 19:03:35 compute-0 systemd[1]: Started Ceph osd.1 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:03:35 compute-0 ceph-osd[87071]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 19:03:35 compute-0 ceph-osd[87071]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: pidfile_write: ignore empty --pid-file
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 sudo[86067]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Jan 20 19:03:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 20 19:03:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:35 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:35 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Jan 20 19:03:35 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8400 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8400 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea8000 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 sudo[87091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:35 compute-0 ceph-osd[87071]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Jan 20 19:03:35 compute-0 sudo[87091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:35 compute-0 sudo[87091]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:35 compute-0 ceph-osd[87071]: load: jerasure load: lrc 
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 sudo[87128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:03:35 compute-0 sudo[87128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:35 compute-0 ceph-osd[87071]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 20 19:03:35 compute-0 ceph-osd[87071]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d8ea9c00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount shared_bdev_used = 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: RocksDB version: 7.9.2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Git sha 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: DB SUMMARY
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: DB Session ID:  BJ7CSLXC1OMZX8UVKFMI
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: CURRENT file:  CURRENT
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                         Options.error_if_exists: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.create_if_missing: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                                     Options.env: 0x5614d8d39ea0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                                Options.info_log: 0x5614d9d8a8a0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                              Options.statistics: (nil)
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.use_fsync: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                              Options.db_log_dir: 
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.write_buffer_manager: 0x5614d8d9eb40
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.unordered_write: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.row_cache: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                              Options.wal_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.two_write_queues: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.wal_compression: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.atomic_flush: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.max_background_jobs: 4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.max_background_compactions: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.max_subcompactions: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.max_open_files: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Compression algorithms supported:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kZSTD supported: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kXpressCompression supported: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kBZip2Compression supported: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kLZ4Compression supported: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kZlibCompression supported: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kSnappyCompression supported: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3d8d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8ac80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 42fb52ca-080c-4b0c-8916-488ff4bd7976
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815612736, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815614231, "job": 1, "event": "recovery_finished"}
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: freelist init
Jan 20 19:03:35 compute-0 ceph-osd[87071]: freelist _read_cfg
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs umount
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) close
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bdev(0x5614d9b3f800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluefs mount shared_bdev_used = 27262976
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: RocksDB version: 7.9.2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Git sha 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: DB SUMMARY
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: DB Session ID:  BJ7CSLXC1OMZX8UVKFMJ
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: CURRENT file:  CURRENT
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                         Options.error_if_exists: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.create_if_missing: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                                     Options.env: 0x5614d8d39d50
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                                Options.info_log: 0x5614d9d8baa0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                              Options.statistics: (nil)
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.use_fsync: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                              Options.db_log_dir: 
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.write_buffer_manager: 0x5614d8d9f900
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.unordered_write: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.row_cache: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                              Options.wal_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.two_write_queues: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.wal_compression: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.atomic_flush: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.max_background_jobs: 4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.max_background_compactions: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.max_subcompactions: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.max_open_files: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Compression algorithms supported:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kZSTD supported: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kXpressCompression supported: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kBZip2Compression supported: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kLZ4Compression supported: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kZlibCompression supported: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         kSnappyCompression supported: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bea0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3da30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bec0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3d4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bec0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3d4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5614d9d8bec0)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5614d8d3d4b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 42fb52ca-080c-4b0c-8916-488ff4bd7976
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815668202, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815684107, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935815, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42fb52ca-080c-4b0c-8916-488ff4bd7976", "db_session_id": "BJ7CSLXC1OMZX8UVKFMJ", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815713144, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935815, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42fb52ca-080c-4b0c-8916-488ff4bd7976", "db_session_id": "BJ7CSLXC1OMZX8UVKFMJ", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815716558, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935815, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "42fb52ca-080c-4b0c-8916-488ff4bd7976", "db_session_id": "BJ7CSLXC1OMZX8UVKFMJ", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935815734088, "job": 1, "event": "recovery_finished"}
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5614d9f93c00
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: DB pointer 0x5614d9f44000
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Jan 20 19:03:35 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:03:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 4.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:03:35 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 20 19:03:35 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 20 19:03:35 compute-0 ceph-osd[87071]: _get_class not permitted to load lua
Jan 20 19:03:35 compute-0 ceph-osd[87071]: _get_class not permitted to load sdk
Jan 20 19:03:35 compute-0 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 20 19:03:35 compute-0 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 20 19:03:35 compute-0 ceph-osd[87071]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 20 19:03:35 compute-0 ceph-osd[87071]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 20 19:03:35 compute-0 ceph-osd[87071]: osd.1 0 load_pgs
Jan 20 19:03:35 compute-0 ceph-osd[87071]: osd.1 0 load_pgs opened 0 pgs
Jan 20 19:03:35 compute-0 ceph-osd[87071]: osd.1 0 log_to_monitors true
Jan 20 19:03:35 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1[87067]: 2026-01-20T19:03:35.802+0000 7f47b91368c0 -1 osd.1 0 log_to_monitors true
Jan 20 19:03:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Jan 20 19:03:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 20 19:03:35 compute-0 podman[87583]: 2026-01-20 19:03:35.832503353 +0000 UTC m=+0.020659133 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:36 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 19:03:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:36 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:36 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:36 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Jan 20 19:03:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:03:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 20 19:03:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "osd.2"} : dispatch
Jan 20 19:03:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:36 compute-0 ceph-mon[75120]: Deploying daemon osd.2 on compute-0
Jan 20 19:03:36 compute-0 ceph-mon[75120]: pgmap v31: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 20 19:03:36 compute-0 ceph-mon[75120]: from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} : dispatch
Jan 20 19:03:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:36 compute-0 podman[87583]: 2026-01-20 19:03:36.928001508 +0000 UTC m=+1.116157268 container create 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 19:03:36 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 20 19:03:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Jan 20 19:03:36 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Jan 20 19:03:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 20 19:03:36 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 19:03:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 20 19:03:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:36 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:36 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:36 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:36 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:36 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:36 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:36 compute-0 systemd[1]: Started libpod-conmon-759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb.scope.
Jan 20 19:03:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:37 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 19:03:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:37 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:37 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:37 compute-0 podman[87583]: 2026-01-20 19:03:37.453912635 +0000 UTC m=+1.642068415 container init 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:03:37 compute-0 podman[87583]: 2026-01-20 19:03:37.466436274 +0000 UTC m=+1.654592034 container start 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:03:37 compute-0 youthful_carver[87625]: 167 167
Jan 20 19:03:37 compute-0 systemd[1]: libpod-759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb.scope: Deactivated successfully.
Jan 20 19:03:37 compute-0 conmon[87625]: conmon 759595342d1e8edd5b4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb.scope/container/memory.events
Jan 20 19:03:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:37 compute-0 podman[87583]: 2026-01-20 19:03:37.576878155 +0000 UTC m=+1.765033925 container attach 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:37 compute-0 podman[87583]: 2026-01-20 19:03:37.577688064 +0000 UTC m=+1.765843824 container died 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-000ba12a3bf90d09ed4658d6187b68efae3a305154836583e30f67f7205d7208-merged.mount: Deactivated successfully.
Jan 20 19:03:37 compute-0 podman[87583]: 2026-01-20 19:03:37.711300527 +0000 UTC m=+1.899456287 container remove 759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 19:03:37 compute-0 systemd[1]: libpod-conmon-759595342d1e8edd5b4b8c97715ae09a33510d9cf1188488b4607d5060ec0ecb.scope: Deactivated successfully.
Jan 20 19:03:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Jan 20 19:03:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:03:37 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 19:03:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e10 e10: 3 total, 0 up, 3 in
Jan 20 19:03:37 compute-0 ceph-osd[87071]: osd.1 0 done with init, starting boot process
Jan 20 19:03:37 compute-0 ceph-osd[87071]: osd.1 0 start_boot
Jan 20 19:03:37 compute-0 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 20 19:03:37 compute-0 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 20 19:03:37 compute-0 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 20 19:03:37 compute-0 ceph-osd[87071]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 20 19:03:37 compute-0 ceph-osd[87071]: osd.1 0  bench count 12288000 bsize 4 KiB
Jan 20 19:03:37 compute-0 ceph-mon[75120]: from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Jan 20 19:03:37 compute-0 ceph-mon[75120]: osdmap e9: 3 total, 0 up, 3 in
Jan 20 19:03:37 compute-0 ceph-mon[75120]: from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd={"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 19:03:37 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:37 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:37 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:37 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:37 compute-0 ceph-mon[75120]: pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:37 compute-0 podman[87656]: 2026-01-20 19:03:37.982137608 +0000 UTC m=+0.059350414 container create 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:37 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 0 up, 3 in
Jan 20 19:03:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:37 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:37 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:37 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:37 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:37 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:37 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:37 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 19:03:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:37 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:37 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:38 compute-0 podman[87656]: 2026-01-20 19:03:37.949152523 +0000 UTC m=+0.026365359 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:38 compute-0 systemd[1]: Started libpod-conmon-5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4.scope.
Jan 20 19:03:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:38 compute-0 podman[87656]: 2026-01-20 19:03:38.118442866 +0000 UTC m=+0.195655672 container init 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:38 compute-0 podman[87656]: 2026-01-20 19:03:38.12495577 +0000 UTC m=+0.202168576 container start 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:03:38 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 19:03:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:38 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:38 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:38 compute-0 podman[87656]: 2026-01-20 19:03:38.149806152 +0000 UTC m=+0.227018988 container attach 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 19:03:38 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test[87671]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Jan 20 19:03:38 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test[87671]:                             [--no-systemd] [--no-tmpfs]
Jan 20 19:03:38 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test[87671]: ceph-volume activate: error: unrecognized arguments: --bad-option
Jan 20 19:03:38 compute-0 systemd[1]: libpod-5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4.scope: Deactivated successfully.
Jan 20 19:03:38 compute-0 podman[87656]: 2026-01-20 19:03:38.32431425 +0000 UTC m=+0.401527036 container died 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-b5009cbeb134fd2e242626bb831af8b0313b2de087c5f5805396d6176d70247d-merged.mount: Deactivated successfully.
Jan 20 19:03:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:38 compute-0 ceph-mgr[75417]: [devicehealth WARNING root] not enough osds to create mgr pool
Jan 20 19:03:38 compute-0 podman[87656]: 2026-01-20 19:03:38.471153277 +0000 UTC m=+0.548366073 container remove 5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate-test, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:38 compute-0 systemd[1]: libpod-conmon-5b7ac9efce63ef0518828a62678535e7dfa6578de07fa16b4bd6add4c10d63b4.scope: Deactivated successfully.
Jan 20 19:03:38 compute-0 systemd[1]: Reloading.
Jan 20 19:03:38 compute-0 systemd-sysv-generator[87736]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:03:38 compute-0 systemd-rc-local-generator[87728]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:03:38 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 19:03:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:39 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:39 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:39 compute-0 ceph-mon[75120]: purged_snaps scrub starts
Jan 20 19:03:39 compute-0 ceph-mon[75120]: purged_snaps scrub ok
Jan 20 19:03:39 compute-0 ceph-mon[75120]: from='osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 19:03:39 compute-0 ceph-mon[75120]: osdmap e10: 3 total, 0 up, 3 in
Jan 20 19:03:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:39 compute-0 ceph-osd[86022]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 10.972 iops: 2808.890 elapsed_sec: 1.068
Jan 20 19:03:39 compute-0 ceph-osd[86022]: log_channel(cluster) log [WRN] : OSD bench result of 2808.890266 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 19:03:39 compute-0 ceph-osd[86022]: osd.0 0 waiting for initial osdmap
Jan 20 19:03:39 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0[86018]: 2026-01-20T19:03:39.094+0000 7f1d5e490640 -1 osd.0 0 waiting for initial osdmap
Jan 20 19:03:39 compute-0 ceph-osd[86022]: osd.0 10 crush map has features 288514050185494528, adjusting msgr requires for clients
Jan 20 19:03:39 compute-0 ceph-osd[86022]: osd.0 10 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Jan 20 19:03:39 compute-0 ceph-osd[86022]: osd.0 10 crush map has features 3314932999778484224, adjusting msgr requires for osds
Jan 20 19:03:39 compute-0 ceph-osd[86022]: osd.0 10 check_osdmap_features require_osd_release unknown -> tentacle
Jan 20 19:03:39 compute-0 ceph-osd[86022]: osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 19:03:39 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-0[86018]: 2026-01-20T19:03:39.159+0000 7f1d58a83640 -1 osd.0 10 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 19:03:39 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4109328083; not ready for session (expect reconnect)
Jan 20 19:03:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:39 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:39 compute-0 ceph-osd[86022]: osd.0 10 set_numa_affinity not setting numa affinity
Jan 20 19:03:39 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Jan 20 19:03:39 compute-0 ceph-osd[86022]: osd.0 10 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Jan 20 19:03:39 compute-0 systemd[1]: Reloading.
Jan 20 19:03:39 compute-0 systemd-rc-local-generator[87769]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:03:39 compute-0 systemd-sysv-generator[87773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:03:39 compute-0 systemd[1]: Starting Ceph osd.2 for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:03:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:39 compute-0 podman[87834]: 2026-01-20 19:03:39.841013778 +0000 UTC m=+0.080830037 container create ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:39 compute-0 podman[87834]: 2026-01-20 19:03:39.812379865 +0000 UTC m=+0.052196154 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:39 compute-0 podman[87834]: 2026-01-20 19:03:39.930437928 +0000 UTC m=+0.170254207 container init ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:39 compute-0 podman[87834]: 2026-01-20 19:03:39.936220576 +0000 UTC m=+0.176036835 container start ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 19:03:39 compute-0 podman[87834]: 2026-01-20 19:03:39.963962357 +0000 UTC m=+0.203778616 container attach ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:03:39 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 19:03:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:39 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:39 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Jan 20 19:03:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:03:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Jan 20 19:03:40 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083] boot
Jan 20 19:03:40 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Jan 20 19:03:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Jan 20 19:03:40 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:40 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:40 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:40 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:40 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:40 compute-0 ceph-mon[75120]: OSD bench result of 2808.890266 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 19:03:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:40 compute-0 ceph-mon[75120]: pgmap v35: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Jan 20 19:03:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:40 compute-0 ceph-osd[86022]: osd.0 11 state: booting -> active
Jan 20 19:03:40 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:40 compute-0 bash[87834]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:40 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:40 compute-0 bash[87834]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:40 compute-0 ceph-mgr[75417]: [devicehealth INFO root] creating mgr pool
Jan 20 19:03:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Jan 20 19:03:40 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 20 19:03:40 compute-0 lvm[87934]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:03:40 compute-0 lvm[87934]: VG ceph_vg0 finished
Jan 20 19:03:40 compute-0 lvm[87937]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:03:40 compute-0 lvm[87937]: VG ceph_vg1 finished
Jan 20 19:03:40 compute-0 lvm[87939]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:03:40 compute-0 lvm[87939]: VG ceph_vg2 finished
Jan 20 19:03:40 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 19:03:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:40 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:40 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:41 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 19:03:41 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:41 compute-0 bash[87834]: --> Failed to activate via raw: did not find any matching OSD to activate
Jan 20 19:03:41 compute-0 bash[87834]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Jan 20 19:03:41 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:41 compute-0 bash[87834]: Running command: /usr/bin/ceph-authtool --gen-print-key
Jan 20 19:03:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Jan 20 19:03:41 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Jan 20 19:03:41 compute-0 ceph-osd[86022]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 20 19:03:41 compute-0 ceph-osd[86022]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Jan 20 19:03:41 compute-0 ceph-osd[86022]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:41 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:41 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:41 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:41 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Jan 20 19:03:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 20 19:03:41 compute-0 ceph-mon[75120]: osd.0 [v2:192.168.122.100:6802/4109328083,v1:192.168.122.100:6803/4109328083] boot
Jan 20 19:03:41 compute-0 ceph-mon[75120]: osdmap e11: 3 total, 1 up, 3 in
Jan 20 19:03:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 0} : dispatch
Jan 20 19:03:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} : dispatch
Jan 20 19:03:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:41 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 19:03:41 compute-0 bash[87834]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 19:03:41 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 20 19:03:41 compute-0 bash[87834]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Jan 20 19:03:41 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:41 compute-0 bash[87834]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:41 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:41 compute-0 bash[87834]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:41 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 20 19:03:41 compute-0 bash[87834]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Jan 20 19:03:41 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 19:03:41 compute-0 bash[87834]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Jan 20 19:03:41 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate[87849]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 20 19:03:41 compute-0 bash[87834]: --> ceph-volume lvm activate successful for osd ID: 2
Jan 20 19:03:41 compute-0 systemd[1]: libpod-ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05.scope: Deactivated successfully.
Jan 20 19:03:41 compute-0 systemd[1]: libpod-ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05.scope: Consumed 1.924s CPU time.
Jan 20 19:03:41 compute-0 podman[87834]: 2026-01-20 19:03:41.302181083 +0000 UTC m=+1.541997362 container died ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-e012497fcc1bd994fe19065dcb808c43757b7004f29a95a2b719ce6d5a225bc5-merged.mount: Deactivated successfully.
Jan 20 19:03:41 compute-0 podman[87834]: 2026-01-20 19:03:41.40824502 +0000 UTC m=+1.648061279 container remove ef82902363af87844a43a9867939e42e2d9f20b593654356d1f595f67ce6aa05 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2-activate, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 20 19:03:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v38: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 20 19:03:41 compute-0 podman[88093]: 2026-01-20 19:03:41.701032184 +0000 UTC m=+0.069416304 container create d045a60defb83ca2430bb352b449b140006aab4f12b730bbce1d767b793cc797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 20 19:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77ce245ecc3f3cdf3c64497903bedb83ea375b5e67c339f51c2a280f0dced5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77ce245ecc3f3cdf3c64497903bedb83ea375b5e67c339f51c2a280f0dced5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77ce245ecc3f3cdf3c64497903bedb83ea375b5e67c339f51c2a280f0dced5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77ce245ecc3f3cdf3c64497903bedb83ea375b5e67c339f51c2a280f0dced5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f77ce245ecc3f3cdf3c64497903bedb83ea375b5e67c339f51c2a280f0dced5b/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:41 compute-0 podman[88093]: 2026-01-20 19:03:41.769097506 +0000 UTC m=+0.137481636 container init d045a60defb83ca2430bb352b449b140006aab4f12b730bbce1d767b793cc797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:41 compute-0 podman[88093]: 2026-01-20 19:03:41.677109874 +0000 UTC m=+0.045494004 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:41 compute-0 podman[88093]: 2026-01-20 19:03:41.775449597 +0000 UTC m=+0.143833697 container start d045a60defb83ca2430bb352b449b140006aab4f12b730bbce1d767b793cc797 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:03:41 compute-0 bash[88093]: d045a60defb83ca2430bb352b449b140006aab4f12b730bbce1d767b793cc797
Jan 20 19:03:41 compute-0 systemd[1]: Started Ceph osd.2 for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:03:41 compute-0 ceph-osd[88112]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 19:03:41 compute-0 ceph-osd[88112]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-osd, pid 2
Jan 20 19:03:41 compute-0 ceph-osd[88112]: pidfile_write: ignore empty --pid-file
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:41 compute-0 sudo[87128]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e400 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e400 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:41 compute-0 sudo[88128]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:41 compute-0 sudo[88128]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:41 compute-0 sudo[88128]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:41 compute-0 ceph-osd[88112]: bdev(0x5564ebe7e000 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:41 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 19:03:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:41 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:41 compute-0 ceph-osd[88112]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Jan 20 19:03:41 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:42 compute-0 sudo[88191]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yztaahtgpryyzbwjcwgiosgagoaqzfxc ; /usr/bin/python3'
Jan 20 19:03:42 compute-0 ceph-osd[88112]: load: jerasure load: lrc 
Jan 20 19:03:42 compute-0 sudo[88191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:42 compute-0 sudo[88179]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:03:42 compute-0 sudo[88179]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:42 compute-0 ceph-osd[88112]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Jan 20 19:03:42 compute-0 ceph-osd[88112]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Jan 20 19:03:42 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:42 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 20 19:03:42 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e13 e13: 3 total, 1 up, 3 in
Jan 20 19:03:42 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 1 up, 3 in
Jan 20 19:03:42 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:42 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:42 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:42 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:42 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:42 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Jan 20 19:03:42 compute-0 ceph-mon[75120]: osdmap e12: 3 total, 1 up, 3 in
Jan 20 19:03:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} : dispatch
Jan 20 19:03:42 compute-0 ceph-mon[75120]: pgmap v38: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 20 19:03:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Jan 20 19:03:42 compute-0 ceph-mon[75120]: osdmap e13: 3 total, 1 up, 3 in
Jan 20 19:03:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ebe7fc00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount shared_bdev_used = 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 19:03:42 compute-0 python3[88212]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: RocksDB version: 7.9.2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Git sha 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: DB SUMMARY
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: DB Session ID:  56IM7OZ4MESAT1MG9R0Y
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: CURRENT file:  CURRENT
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                         Options.error_if_exists: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.create_if_missing: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                                     Options.env: 0x5564ebd0fea0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                                Options.info_log: 0x5564ecda08a0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                              Options.statistics: (nil)
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.use_fsync: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                              Options.db_log_dir: 
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.write_buffer_manager: 0x5564ebd74b40
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.unordered_write: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.row_cache: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                              Options.wal_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.two_write_queues: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.wal_compression: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.atomic_flush: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.max_background_jobs: 4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.max_background_compactions: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.max_subcompactions: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.max_open_files: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Compression algorithms supported:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kZSTD supported: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kXpressCompression supported: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kBZip2Compression supported: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kLZ4Compression supported: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kZlibCompression supported: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kSnappyCompression supported: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd138d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd138d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd138d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd138d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd138d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd138d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c60)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd138d0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd13a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd13a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ecda0c80)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd13a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e81d777e-bb5f-4cd7-b7f1-0f55caa3acea
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822247859, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822250067, "job": 1, "event": "recovery_finished"}
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: freelist init
Jan 20 19:03:42 compute-0 ceph-osd[88112]: freelist _read_cfg
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_fm effective freelist_type = bitmap, freelist_alloc_size = 0x1000, min_alloc_size = 0x1000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs umount
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) close
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bdev(0x5564ecb1f800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluefs mount shared_bdev_used = 27262976
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: RocksDB version: 7.9.2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Git sha 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Compile date 2025-10-30 15:42:43
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: DB SUMMARY
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: DB Session ID:  56IM7OZ4MESAT1MG9R0Z
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: CURRENT file:  CURRENT
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: IDENTITY file:  IDENTITY
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                         Options.error_if_exists: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.create_if_missing: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                         Options.paranoid_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.flush_verify_memtable_count: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                                     Options.env: 0x5564ebd0fd50
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                                      Options.fs: LegacyFileSystem
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                                Options.info_log: 0x5564ecda1b00
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_file_opening_threads: 16
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                              Options.statistics: (nil)
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.use_fsync: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.max_log_file_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.log_file_time_to_roll: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.keep_log_file_num: 1000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.recycle_log_file_num: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                         Options.allow_fallocate: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.allow_mmap_reads: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.allow_mmap_writes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.use_direct_reads: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.create_missing_column_families: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                              Options.db_log_dir: 
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                                 Options.wal_dir: db.wal
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.table_cache_numshardbits: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                         Options.WAL_ttl_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.WAL_size_limit_MB: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.manifest_preallocation_size: 4194304
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                     Options.is_fd_close_on_exec: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.advise_random_on_open: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.db_write_buffer_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.write_buffer_manager: 0x5564ebd75900
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.access_hint_on_compaction_start: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                      Options.use_adaptive_mutex: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                            Options.rate_limiter: (nil)
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.wal_recovery_mode: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.enable_thread_tracking: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.enable_pipelined_write: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.unordered_write: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.write_thread_max_yield_usec: 100
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.row_cache: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                              Options.wal_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.avoid_flush_during_recovery: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.allow_ingest_behind: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.two_write_queues: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.manual_wal_flush: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.wal_compression: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.atomic_flush: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.persist_stats_to_disk: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.write_dbid_to_manifest: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.log_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.best_efforts_recovery: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.allow_data_in_errors: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.db_host_id: __hostname__
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.enforce_single_del_contracts: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.max_background_jobs: 4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.max_background_compactions: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.max_subcompactions: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.writable_file_max_buffer_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.delayed_write_rate : 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.max_total_wal_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.stats_dump_period_sec: 600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.stats_persist_period_sec: 600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.max_open_files: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.bytes_per_sync: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                      Options.wal_bytes_per_sync: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.strict_bytes_per_sync: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.compaction_readahead_size: 2097152
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.max_background_flushes: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Compression algorithms supported:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kZSTD supported: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kXpressCompression supported: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kBZip2Compression supported: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kZSTDNotFinalCompression supported: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kLZ4Compression supported: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kZlibCompression supported: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kLZ4HCCompression supported: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         kSnappyCompression supported: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Fast CRC32 supported: Supported on x86
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: DMutex implementation: pthread_mutex_t
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd13a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd13a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd13a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd13a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd13a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd13a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06220)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd13a30
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 483183820
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd134b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd134b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 podman[88241]: 2026-01-20 19:03:42.324174098 +0000 UTC m=+0.071326481 container create 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:           Options.merge_operator: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.compaction_filter_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.sst_partitioner_factory: None
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.memtable_factory: SkipListFactory
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.table_factory: BlockBasedTable
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5564ece06300)
                                             cache_index_and_filter_blocks: 1
                                             cache_index_and_filter_blocks_with_high_priority: 0
                                             pin_l0_filter_and_index_blocks_in_cache: 0
                                             pin_top_level_index_and_filter: 1
                                             index_type: 0
                                             data_block_index_type: 0
                                             index_shortening: 1
                                             data_block_hash_table_util_ratio: 0.750000
                                             checksum: 4
                                             no_block_cache: 0
                                             block_cache: 0x5564ebd134b0
                                             block_cache_name: BinnedLRUCache
                                             block_cache_options:
                                               capacity : 536870912
                                               num_shard_bits : 4
                                               strict_capacity_limit : 0
                                               high_pri_pool_ratio: 0.000
                                             block_cache_compressed: (nil)
                                             persistent_cache: (nil)
                                             block_size: 4096
                                             block_size_deviation: 10
                                             block_restart_interval: 16
                                             index_block_restart_interval: 1
                                             metadata_block_size: 4096
                                             partition_filters: 0
                                             use_delta_encoding: 1
                                             filter_policy: bloomfilter
                                             whole_key_filtering: 1
                                             verify_compression: 0
                                             read_amp_bytes_per_bit: 0
                                             format_version: 5
                                             enable_index_compression: 1
                                             block_align: 0
                                             max_auto_readahead_size: 262144
                                             prepopulate_block_cache: 0
                                             initial_auto_readahead_size: 8192
                                             num_file_reads_for_auto_readahead: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.write_buffer_size: 16777216
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.max_write_buffer_number: 64
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.compression: LZ4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression: Disabled
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.num_levels: 7
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:            Options.compression_opts.window_bits: -14
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.level: 32767
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.compression_opts.strategy: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.parallel_threads: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                  Options.compression_opts.enabled: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:              Options.level0_stop_writes_trigger: 36
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.target_file_size_base: 67108864
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:             Options.target_file_size_multiplier: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.arena_block_size: 1048576
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.disable_auto_compactions: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.inplace_update_support: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                 Options.inplace_update_num_locks: 10000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:               Options.memtable_whole_key_filtering: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:   Options.memtable_huge_page_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.bloom_locality: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                    Options.max_successive_merges: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.optimize_filters_for_hits: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.paranoid_file_checks: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.force_consistency_checks: 1
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.report_bg_io_stats: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                               Options.ttl: 2592000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.periodic_compaction_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:    Options.preserve_internal_time_seconds: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                       Options.enable_blob_files: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                           Options.min_blob_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                          Options.blob_file_size: 268435456
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                   Options.blob_compression_type: NoCompression
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.enable_blob_garbage_collection: false
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:          Options.blob_compaction_readahead_size: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb:                Options.blob_file_starting_level: 0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/column_family.cc:635]         (skipping printing options)
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e81d777e-bb5f-4cd7-b7f1-0f55caa3acea
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822317758, "job": 1, "event": "recovery_started", "wal_files": [31]}
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822333724, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 131, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935822, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e81d777e-bb5f-4cd7-b7f1-0f55caa3acea", "db_session_id": "56IM7OZ4MESAT1MG9R0Z", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822349396, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935822, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e81d777e-bb5f-4cd7-b7f1-0f55caa3acea", "db_session_id": "56IM7OZ4MESAT1MG9R0Z", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822361122, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935822, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e81d777e-bb5f-4cd7-b7f1-0f55caa3acea", "db_session_id": "56IM7OZ4MESAT1MG9R0Z", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768935822370685, "job": 1, "event": "recovery_finished"}
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Jan 20 19:03:42 compute-0 systemd[1]: Started libpod-conmon-2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39.scope.
Jan 20 19:03:42 compute-0 podman[88241]: 2026-01-20 19:03:42.299335927 +0000 UTC m=+0.046488340 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:42 compute-0 ceph-osd[87071]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.126 iops: 5920.277 elapsed_sec: 0.507
Jan 20 19:03:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [WRN] : OSD bench result of 5920.276596 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5564ecda3c00
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: DB pointer 0x5564ecf5a000
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Jan 20 19:03:42 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Jan 20 19:03:42 compute-0 ceph-osd[87071]: osd.1 0 waiting for initial osdmap
Jan 20 19:03:42 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1[87067]: 2026-01-20T19:03:42.414+0000 7f47b50b8640 -1 osd.1 0 waiting for initial osdmap
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:03:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
                                           Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
                                           Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 512.00 MB usage: 0.25 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.14 KB,2.68221e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 0.1 total, 0.1 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:03:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:42 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Jan 20 19:03:42 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/hello/cls_hello.cc:316: loading cls_hello
Jan 20 19:03:42 compute-0 ceph-osd[88112]: _get_class not permitted to load lua
Jan 20 19:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6659d97a8da98fff18c34fb140f751fd41d8a9fbf5ef49b555008bfda0e05333/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6659d97a8da98fff18c34fb140f751fd41d8a9fbf5ef49b555008bfda0e05333/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6659d97a8da98fff18c34fb140f751fd41d8a9fbf5ef49b555008bfda0e05333/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:42 compute-0 ceph-osd[87071]: osd.1 13 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 20 19:03:42 compute-0 ceph-osd[87071]: osd.1 13 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 20 19:03:42 compute-0 ceph-osd[87071]: osd.1 13 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 20 19:03:42 compute-0 ceph-osd[87071]: osd.1 13 check_osdmap_features require_osd_release unknown -> tentacle
Jan 20 19:03:42 compute-0 ceph-osd[88112]: _get_class not permitted to load sdk
Jan 20 19:03:42 compute-0 ceph-osd[88112]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Jan 20 19:03:42 compute-0 ceph-osd[88112]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Jan 20 19:03:42 compute-0 ceph-osd[88112]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Jan 20 19:03:42 compute-0 ceph-osd[88112]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Jan 20 19:03:42 compute-0 ceph-osd[88112]: osd.2 0 load_pgs
Jan 20 19:03:42 compute-0 ceph-osd[88112]: osd.2 0 load_pgs opened 0 pgs
Jan 20 19:03:42 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2[88108]: 2026-01-20T19:03:42.435+0000 7f48d83058c0 -1 osd.2 0 log_to_monitors true
Jan 20 19:03:42 compute-0 ceph-osd[88112]: osd.2 0 log_to_monitors true
Jan 20 19:03:42 compute-0 podman[88241]: 2026-01-20 19:03:42.447964147 +0000 UTC m=+0.195116550 container init 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 19:03:42 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Jan 20 19:03:42 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 20 19:03:42 compute-0 podman[88241]: 2026-01-20 19:03:42.457594746 +0000 UTC m=+0.204747129 container start 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:42 compute-0 podman[88241]: 2026-01-20 19:03:42.46533535 +0000 UTC m=+0.212487763 container attach 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:42 compute-0 ceph-osd[87071]: osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 19:03:42 compute-0 ceph-osd[87071]: osd.1 13 set_numa_affinity not setting numa affinity
Jan 20 19:03:42 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-1[87067]: 2026-01-20T19:03:42.465+0000 7f47afebd640 -1 osd.1 13 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 19:03:42 compute-0 ceph-osd[87071]: osd.1 13 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial no unique device path for loop4: no symlink to loop4 in /dev/disk/by-path
Jan 20 19:03:42 compute-0 podman[88639]: 2026-01-20 19:03:42.536562837 +0000 UTC m=+0.081992754 container create 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:42 compute-0 systemd[1]: Started libpod-conmon-10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56.scope.
Jan 20 19:03:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:42 compute-0 podman[88639]: 2026-01-20 19:03:42.517868982 +0000 UTC m=+0.063298909 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:42 compute-0 podman[88639]: 2026-01-20 19:03:42.611221465 +0000 UTC m=+0.156651422 container init 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:42 compute-0 podman[88639]: 2026-01-20 19:03:42.616579033 +0000 UTC m=+0.162008960 container start 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:03:42 compute-0 reverent_golick[88686]: 167 167
Jan 20 19:03:42 compute-0 podman[88639]: 2026-01-20 19:03:42.620168029 +0000 UTC m=+0.165597956 container attach 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:42 compute-0 systemd[1]: libpod-10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56.scope: Deactivated successfully.
Jan 20 19:03:42 compute-0 podman[88639]: 2026-01-20 19:03:42.632663037 +0000 UTC m=+0.178092964 container died 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 19:03:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-689326ad401e1c110c205c7454883a526610d83f02abda542c1c5a5b1153d62d-merged.mount: Deactivated successfully.
Jan 20 19:03:42 compute-0 podman[88639]: 2026-01-20 19:03:42.698815532 +0000 UTC m=+0.244245459 container remove 10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_golick, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True)
Jan 20 19:03:42 compute-0 systemd[1]: libpod-conmon-10f5bce85645d8be16b20cc9e90c09ebb24a61db555f4a1e3b02ed05c5ff3b56.scope: Deactivated successfully.
Jan 20 19:03:42 compute-0 podman[88728]: 2026-01-20 19:03:42.921100967 +0000 UTC m=+0.073765028 container create a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:42 compute-0 systemd[1]: Started libpod-conmon-a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15.scope.
Jan 20 19:03:42 compute-0 podman[88728]: 2026-01-20 19:03:42.895546168 +0000 UTC m=+0.048210299 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8b4e161ae629dd346eadeb8e9a9000db94eb04192e85d0363d8a5c5e744c1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8b4e161ae629dd346eadeb8e9a9000db94eb04192e85d0363d8a5c5e744c1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8b4e161ae629dd346eadeb8e9a9000db94eb04192e85d0363d8a5c5e744c1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc8b4e161ae629dd346eadeb8e9a9000db94eb04192e85d0363d8a5c5e744c1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:43 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/3353689594; not ready for session (expect reconnect)
Jan 20 19:03:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:43 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:43 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Jan 20 19:03:43 compute-0 podman[88728]: 2026-01-20 19:03:43.018558878 +0000 UTC m=+0.171222959 container init a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 19:03:43 compute-0 podman[88728]: 2026-01-20 19:03:43.025302349 +0000 UTC m=+0.177966380 container start a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:43 compute-0 podman[88728]: 2026-01-20 19:03:43.028727471 +0000 UTC m=+0.181391532 container attach a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 19:03:43 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3531939254' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 19:03:43 compute-0 gracious_mcnulty[88629]: 
Jan 20 19:03:43 compute-0 gracious_mcnulty[88629]: {"fsid":"90fff835-31df-513f-a409-b6642f04e6ac","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":95,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":13,"num_osds":3,"num_up_osds":1,"osd_up_since":1768935820,"num_in_osds":3,"osd_in_since":1768935800,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":447000576,"bytes_avail":21023641600,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"btime":"2026-01-20T19:02:04:930609+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":1,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-20T19:03:35.512911+0000","services":{}},"progress_events":{}}
Jan 20 19:03:43 compute-0 systemd[1]: libpod-2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39.scope: Deactivated successfully.
Jan 20 19:03:43 compute-0 podman[88241]: 2026-01-20 19:03:43.105743616 +0000 UTC m=+0.852896019 container died 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 20 19:03:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-6659d97a8da98fff18c34fb140f751fd41d8a9fbf5ef49b555008bfda0e05333-merged.mount: Deactivated successfully.
Jan 20 19:03:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Jan 20 19:03:43 compute-0 podman[88241]: 2026-01-20 19:03:43.157654982 +0000 UTC m=+0.904807365 container remove 2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39 (image=quay.io/ceph/ceph:v20, name=gracious_mcnulty, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:03:43 compute-0 ceph-mon[75120]: OSD bench result of 5920.276596 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 19:03:43 compute-0 ceph-mon[75120]: from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd={"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} : dispatch
Jan 20 19:03:43 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:43 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3531939254' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 19:03:43 compute-0 systemd[1]: libpod-conmon-2b02372cc5241e478d2dc6edb319062541f950d7f6daaae448f29248608d0b39.scope: Deactivated successfully.
Jan 20 19:03:43 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 20 19:03:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Jan 20 19:03:43 compute-0 ceph-osd[87071]: osd.1 14 state: booting -> active
Jan 20 19:03:43 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594] boot
Jan 20 19:03:43 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Jan 20 19:03:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Jan 20 19:03:43 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 19:03:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e14 create-or-move crush item name 'osd.2' initial_weight 0.02 at location {host=compute-0,root=default}
Jan 20 19:03:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Jan 20 19:03:43 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:03:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:43 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:43 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:43 compute-0 sudo[88191]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:43 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.meyjbf(active, since 72s)
Jan 20 19:03:43 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Jan 20 19:03:43 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Jan 20 19:03:43 compute-0 sudo[88809]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjmmibbntyjkfknurdzzymzgfzpzhvdb ; /usr/bin/python3'
Jan 20 19:03:43 compute-0 sudo[88809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 20 19:03:43 compute-0 python3[88816]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:43 compute-0 podman[88851]: 2026-01-20 19:03:43.70709235 +0000 UTC m=+0.041024658 container create ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:43 compute-0 systemd[1]: Started libpod-conmon-ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881.scope.
Jan 20 19:03:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90455b59853867601d2d3652918426d3844c115a6e0d830d3d7b19ed19d5fdf2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90455b59853867601d2d3652918426d3844c115a6e0d830d3d7b19ed19d5fdf2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:43 compute-0 lvm[88881]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:03:43 compute-0 lvm[88881]: VG ceph_vg0 finished
Jan 20 19:03:43 compute-0 podman[88851]: 2026-01-20 19:03:43.688997708 +0000 UTC m=+0.022930036 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:43 compute-0 podman[88851]: 2026-01-20 19:03:43.78982714 +0000 UTC m=+0.123759458 container init ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:43 compute-0 podman[88851]: 2026-01-20 19:03:43.796556291 +0000 UTC m=+0.130488599 container start ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:03:43 compute-0 podman[88851]: 2026-01-20 19:03:43.800055994 +0000 UTC m=+0.133988302 container attach ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 20 19:03:43 compute-0 lvm[88883]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:03:43 compute-0 lvm[88883]: VG ceph_vg1 finished
Jan 20 19:03:43 compute-0 lvm[88885]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:03:43 compute-0 lvm[88885]: VG ceph_vg2 finished
Jan 20 19:03:43 compute-0 serene_merkle[88744]: {}
Jan 20 19:03:43 compute-0 systemd[1]: libpod-a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15.scope: Deactivated successfully.
Jan 20 19:03:43 compute-0 podman[88728]: 2026-01-20 19:03:43.946297498 +0000 UTC m=+1.098961519 container died a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:43 compute-0 systemd[1]: libpod-a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15.scope: Consumed 1.471s CPU time.
Jan 20 19:03:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc8b4e161ae629dd346eadeb8e9a9000db94eb04192e85d0363d8a5c5e744c1b-merged.mount: Deactivated successfully.
Jan 20 19:03:43 compute-0 podman[88728]: 2026-01-20 19:03:43.997743483 +0000 UTC m=+1.150407504 container remove a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:44 compute-0 systemd[1]: libpod-conmon-a7b9354673116669a073c1824c7cc3f412bc61be983ccedf80ef35cff1921d15.scope: Deactivated successfully.
Jan 20 19:03:44 compute-0 sudo[88179]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:44 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:44 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:44 compute-0 sudo[88918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:03:44 compute-0 sudo[88918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:44 compute-0 sudo[88918]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Jan 20 19:03:44 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 19:03:44 compute-0 sudo[88943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Jan 20 19:03:44 compute-0 ceph-osd[88112]: osd.2 0 done with init, starting boot process
Jan 20 19:03:44 compute-0 ceph-osd[88112]: osd.2 0 start_boot
Jan 20 19:03:44 compute-0 ceph-osd[88112]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Jan 20 19:03:44 compute-0 ceph-osd[88112]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Jan 20 19:03:44 compute-0 ceph-osd[88112]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Jan 20 19:03:44 compute-0 ceph-osd[88112]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Jan 20 19:03:44 compute-0 ceph-osd[88112]: osd.2 0  bench count 12288000 bsize 4 KiB
Jan 20 19:03:44 compute-0 sudo[88943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:44 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Jan 20 19:03:44 compute-0 sudo[88943]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 15 pg[1.0( empty local-lis/les=14/15 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=14) [1] r=0 lpr=14 pi=[12,14)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:03:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:44 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:44 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:44 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 19:03:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:44 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:44 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:44 compute-0 ceph-mon[75120]: from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Jan 20 19:03:44 compute-0 ceph-mon[75120]: osd.1 [v2:192.168.122.100:6806/3353689594,v1:192.168.122.100:6807/3353689594] boot
Jan 20 19:03:44 compute-0 ceph-mon[75120]: osdmap e14: 3 total, 2 up, 3 in
Jan 20 19:03:44 compute-0 ceph-mon[75120]: from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd={"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} : dispatch
Jan 20 19:03:44 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 1} : dispatch
Jan 20 19:03:44 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:44 compute-0 ceph-mon[75120]: mgrmap e10: compute-0.meyjbf(active, since 72s)
Jan 20 19:03:44 compute-0 ceph-mon[75120]: pgmap v41: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Jan 20 19:03:44 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:44 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:44 compute-0 sudo[88968]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 20 19:03:44 compute-0 sudo[88968]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 19:03:44 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1239761555' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:44 compute-0 podman[89040]: 2026-01-20 19:03:44.826601477 +0000 UTC m=+0.118112405 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 19:03:44 compute-0 podman[89040]: 2026-01-20 19:03:44.93379782 +0000 UTC m=+0.225308728 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:03:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Jan 20 19:03:45 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 19:03:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:45 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:45 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1239761555' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:45 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Jan 20 19:03:45 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Jan 20 19:03:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:45 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:45 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:45 compute-0 gifted_shockley[88877]: pool 'vms' created
Jan 20 19:03:45 compute-0 ceph-mon[75120]: from='osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Jan 20 19:03:45 compute-0 ceph-mon[75120]: osdmap e15: 3 total, 2 up, 3 in
Jan 20 19:03:45 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:45 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:45 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1239761555' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:45 compute-0 systemd[1]: libpod-ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881.scope: Deactivated successfully.
Jan 20 19:03:45 compute-0 podman[88851]: 2026-01-20 19:03:45.236419749 +0000 UTC m=+1.570352067 container died ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:03:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-90455b59853867601d2d3652918426d3844c115a6e0d830d3d7b19ed19d5fdf2-merged.mount: Deactivated successfully.
Jan 20 19:03:45 compute-0 podman[88851]: 2026-01-20 19:03:45.371491777 +0000 UTC m=+1.705424085 container remove ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881 (image=quay.io/ceph/ceph:v20, name=gifted_shockley, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:45 compute-0 systemd[1]: libpod-conmon-ad665e7306d1f2cc10b0a6cf8f7fafd2a474681345485c2f38b7495204529881.scope: Deactivated successfully.
Jan 20 19:03:45 compute-0 sudo[88809]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v44: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 20 19:03:45 compute-0 sudo[89224]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwmcxgnkfmufabvxwpuumsudwghvterj ; /usr/bin/python3'
Jan 20 19:03:45 compute-0 sudo[89224]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:45 compute-0 sudo[88968]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:45 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:45 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:45 compute-0 sudo[89227]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:45 compute-0 sudo[89227]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:45 compute-0 sudo[89227]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:45 compute-0 sudo[89252]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- inventory --format=json-pretty --filter-for-batch
Jan 20 19:03:45 compute-0 python3[89226]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:45 compute-0 sudo[89252]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:45 compute-0 podman[89276]: 2026-01-20 19:03:45.806711824 +0000 UTC m=+0.035700692 container create 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:03:45 compute-0 systemd[1]: Started libpod-conmon-7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd.scope.
Jan 20 19:03:45 compute-0 podman[89276]: 2026-01-20 19:03:45.791237625 +0000 UTC m=+0.020226523 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0009184716c148ae5c2ed4a1e345d0e9149e77eb5e1f77cfd0ca822b63c675/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b0009184716c148ae5c2ed4a1e345d0e9149e77eb5e1f77cfd0ca822b63c675/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:45 compute-0 podman[89276]: 2026-01-20 19:03:45.957751302 +0000 UTC m=+0.186740180 container init 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:46 compute-0 podman[89276]: 2026-01-20 19:03:46.006662486 +0000 UTC m=+0.235651374 container start 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 19:03:46 compute-0 podman[89276]: 2026-01-20 19:03:46.03955959 +0000 UTC m=+0.268548468 container attach 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:03:46 compute-0 podman[89308]: 2026-01-20 19:03:46.185396694 +0000 UTC m=+0.097534855 container create 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:46 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 19:03:46 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:46 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:46 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:46 compute-0 podman[89308]: 2026-01-20 19:03:46.140472884 +0000 UTC m=+0.052611065 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:46 compute-0 systemd[1]: Started libpod-conmon-864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998.scope.
Jan 20 19:03:46 compute-0 ceph-mon[75120]: purged_snaps scrub starts
Jan 20 19:03:46 compute-0 ceph-mon[75120]: purged_snaps scrub ok
Jan 20 19:03:46 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:46 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1239761555' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:46 compute-0 ceph-mon[75120]: osdmap e16: 3 total, 2 up, 3 in
Jan 20 19:03:46 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:46 compute-0 ceph-mon[75120]: pgmap v44: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 20 19:03:46 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:46 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:46 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:46 compute-0 podman[89308]: 2026-01-20 19:03:46.308114827 +0000 UTC m=+0.220252998 container init 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 19:03:46 compute-0 podman[89308]: 2026-01-20 19:03:46.320891672 +0000 UTC m=+0.233029833 container start 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:03:46 compute-0 hopeful_keldysh[89343]: 167 167
Jan 20 19:03:46 compute-0 systemd[1]: libpod-864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998.scope: Deactivated successfully.
Jan 20 19:03:46 compute-0 podman[89308]: 2026-01-20 19:03:46.340759915 +0000 UTC m=+0.252898106 container attach 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:03:46 compute-0 podman[89308]: 2026-01-20 19:03:46.341166294 +0000 UTC m=+0.253304455 container died 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-b6d4e4d5baee9bc5f88385937da4a12787c13c16670557bf9cbfdce58a789316-merged.mount: Deactivated successfully.
Jan 20 19:03:46 compute-0 podman[89308]: 2026-01-20 19:03:46.443481421 +0000 UTC m=+0.355619592 container remove 864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:03:46 compute-0 systemd[1]: libpod-conmon-864b03bb38561ef54252fbb9ba712373e6ade0d776b950eb4d7b1348c5765998.scope: Deactivated successfully.
Jan 20 19:03:46 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 19:03:46 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1587608720' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:46 compute-0 ceph-mgr[75417]: [devicehealth INFO root] creating main.db for devicehealth
Jan 20 19:03:46 compute-0 podman[89367]: 2026-01-20 19:03:46.634953222 +0000 UTC m=+0.081028601 container create 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:46 compute-0 podman[89367]: 2026-01-20 19:03:46.602429678 +0000 UTC m=+0.048505057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:46 compute-0 systemd[1]: Started libpod-conmon-5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2.scope.
Jan 20 19:03:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4ccf98973c4774ae00f7291be8934a4c83831fe3051b05b4aa257431f902b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4ccf98973c4774ae00f7291be8934a4c83831fe3051b05b4aa257431f902b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4ccf98973c4774ae00f7291be8934a4c83831fe3051b05b4aa257431f902b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4ab4ccf98973c4774ae00f7291be8934a4c83831fe3051b05b4aa257431f902b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:46 compute-0 podman[89367]: 2026-01-20 19:03:46.766925156 +0000 UTC m=+0.213000535 container init 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:03:46 compute-0 ceph-mgr[75417]: [devicehealth INFO root] Check health
Jan 20 19:03:46 compute-0 podman[89367]: 2026-01-20 19:03:46.776269479 +0000 UTC m=+0.222344858 container start 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:46 compute-0 ceph-mgr[75417]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Jan 20 19:03:46 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 20 19:03:46 compute-0 podman[89367]: 2026-01-20 19:03:46.804157593 +0000 UTC m=+0.250232992 container attach 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:46 compute-0 sudo[89403]:     ceph : PWD=/ ; USER=root ; COMMAND=/usr/sbin/smartctl -x --json=o /dev/vda
Jan 20 19:03:46 compute-0 sudo[89403]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 20 19:03:46 compute-0 sudo[89403]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=167)
Jan 20 19:03:46 compute-0 sudo[89403]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:46 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 20 19:03:46 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Jan 20 19:03:46 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 20 19:03:47 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:47 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Jan 20 19:03:47 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1587608720' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:47 compute-0 ceph-mon[75120]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Jan 20 19:03:47 compute-0 ceph-mon[75120]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Jan 20 19:03:47 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mon metadata", "id": "compute-0"} : dispatch
Jan 20 19:03:47 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:47 compute-0 zen_swirles[89387]: [
Jan 20 19:03:47 compute-0 zen_swirles[89387]:     {
Jan 20 19:03:47 compute-0 zen_swirles[89387]:         "available": false,
Jan 20 19:03:47 compute-0 zen_swirles[89387]:         "being_replaced": false,
Jan 20 19:03:47 compute-0 zen_swirles[89387]:         "ceph_device_lvm": false,
Jan 20 19:03:47 compute-0 zen_swirles[89387]:         "device_id": "QEMU_DVD-ROM_QM00001",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:         "lsm_data": {},
Jan 20 19:03:47 compute-0 zen_swirles[89387]:         "lvs": [],
Jan 20 19:03:47 compute-0 zen_swirles[89387]:         "path": "/dev/sr0",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:         "rejected_reasons": [
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "Has a FileSystem",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "Insufficient space (<5GB)"
Jan 20 19:03:47 compute-0 zen_swirles[89387]:         ],
Jan 20 19:03:47 compute-0 zen_swirles[89387]:         "sys_api": {
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "actuators": null,
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "device_nodes": [
Jan 20 19:03:47 compute-0 zen_swirles[89387]:                 "sr0"
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             ],
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "devname": "sr0",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "human_readable_size": "482.00 KB",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "id_bus": "ata",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "model": "QEMU DVD-ROM",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "nr_requests": "2",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "parent": "/dev/sr0",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "partitions": {},
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "path": "/dev/sr0",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "removable": "1",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "rev": "2.5+",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "ro": "0",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "rotational": "1",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "sas_address": "",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "sas_device_handle": "",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "scheduler_mode": "mq-deadline",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "sectors": 0,
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "sectorsize": "2048",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "size": 493568.0,
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "support_discard": "2048",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "type": "disk",
Jan 20 19:03:47 compute-0 zen_swirles[89387]:             "vendor": "QEMU"
Jan 20 19:03:47 compute-0 zen_swirles[89387]:         }
Jan 20 19:03:47 compute-0 zen_swirles[89387]:     }
Jan 20 19:03:47 compute-0 zen_swirles[89387]: ]
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1587608720' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:47 compute-0 systemd[1]: libpod-5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2.scope: Deactivated successfully.
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Jan 20 19:03:47 compute-0 podman[89367]: 2026-01-20 19:03:47.363735863 +0000 UTC m=+0.809811272 container died 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 19:03:47 compute-0 determined_keller[89291]: pool 'volumes' created
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.meyjbf(active, since 76s)
Jan 20 19:03:47 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:47 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:47 compute-0 systemd[1]: libpod-7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd.scope: Deactivated successfully.
Jan 20 19:03:47 compute-0 podman[89276]: 2026-01-20 19:03:47.393340148 +0000 UTC m=+1.622329046 container died 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 19:03:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-4ab4ccf98973c4774ae00f7291be8934a4c83831fe3051b05b4aa257431f902b-merged.mount: Deactivated successfully.
Jan 20 19:03:47 compute-0 podman[89367]: 2026-01-20 19:03:47.457898795 +0000 UTC m=+0.903974174 container remove 5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:03:47 compute-0 systemd[1]: libpod-conmon-5d8a6811faaf9bfcab76f587ee409920db351de2e7e88d128d3090ba95fa94b2.scope: Deactivated successfully.
Jan 20 19:03:47 compute-0 sudo[89252]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v46: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-3b0009184716c148ae5c2ed4a1e345d0e9149e77eb5e1f77cfd0ca822b63c675-merged.mount: Deactivated successfully.
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 20 19:03:47 compute-0 ceph-mgr[75417]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43688k
Jan 20 19:03:47 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43688k
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Jan 20 19:03:47 compute-0 ceph-mgr[75417]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 20 19:03:47 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:03:47 compute-0 podman[89276]: 2026-01-20 19:03:47.606182308 +0000 UTC m=+1.835171186 container remove 7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd (image=quay.io/ceph/ceph:v20, name=determined_keller, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:03:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:03:47 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:47 compute-0 sudo[89224]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:47 compute-0 systemd[1]: libpod-conmon-7a2f4951f9ee0163610b5d89bd3dab056484d84ef238e19e74ca65bb6d417ebd.scope: Deactivated successfully.
Jan 20 19:03:47 compute-0 sudo[90087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:47 compute-0 sudo[90087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:47 compute-0 sudo[90087]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:47 compute-0 sudo[90158]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egfulgzstucglzsdfnpqfnookdpbhuqb ; /usr/bin/python3'
Jan 20 19:03:47 compute-0 sudo[90113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:03:47 compute-0 sudo[90158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:47 compute-0 sudo[90113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:47 compute-0 python3[90161]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:47 compute-0 podman[90163]: 2026-01-20 19:03:47.974486771 +0000 UTC m=+0.067448148 container create 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:48 compute-0 systemd[1]: Started libpod-conmon-1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc.scope.
Jan 20 19:03:48 compute-0 podman[90163]: 2026-01-20 19:03:47.957430535 +0000 UTC m=+0.050391942 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41669442d668b330d4fbc8924bc8305ea8441bbe3994169f58273133c59939f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/41669442d668b330d4fbc8924bc8305ea8441bbe3994169f58273133c59939f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 podman[90163]: 2026-01-20 19:03:48.077451753 +0000 UTC m=+0.170413140 container init 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 19:03:48 compute-0 podman[90163]: 2026-01-20 19:03:48.090146115 +0000 UTC m=+0.183107502 container start 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 19:03:48 compute-0 podman[90163]: 2026-01-20 19:03:48.094084179 +0000 UTC m=+0.187045596 container attach 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:48 compute-0 podman[90192]: 2026-01-20 19:03:48.128585392 +0000 UTC m=+0.080641963 container create 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:48 compute-0 systemd[1]: Started libpod-conmon-8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0.scope.
Jan 20 19:03:48 compute-0 podman[90192]: 2026-01-20 19:03:48.098208028 +0000 UTC m=+0.050264709 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:48 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 19:03:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:48 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:48 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:48 compute-0 podman[90192]: 2026-01-20 19:03:48.208580687 +0000 UTC m=+0.160637258 container init 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 19:03:48 compute-0 podman[90192]: 2026-01-20 19:03:48.215188025 +0000 UTC m=+0.167244596 container start 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 20 19:03:48 compute-0 quizzical_roentgen[90210]: 167 167
Jan 20 19:03:48 compute-0 systemd[1]: libpod-8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0.scope: Deactivated successfully.
Jan 20 19:03:48 compute-0 podman[90192]: 2026-01-20 19:03:48.22633653 +0000 UTC m=+0.178393101 container attach 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:03:48 compute-0 podman[90192]: 2026-01-20 19:03:48.22674888 +0000 UTC m=+0.178805471 container died 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:03:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a0c478b5f542f5b68cb4021cbc11c7f9f630c8afa58c8ab7adc98960103d537-merged.mount: Deactivated successfully.
Jan 20 19:03:48 compute-0 podman[90192]: 2026-01-20 19:03:48.281715339 +0000 UTC m=+0.233771910 container remove 8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:48 compute-0 systemd[1]: libpod-conmon-8a70eeb949229cc8f7ef334baade7ca120a308cb4839cd912c09fdc7d02e12a0.scope: Deactivated successfully.
Jan 20 19:03:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Jan 20 19:03:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Jan 20 19:03:48 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Jan 20 19:03:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:48 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:48 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1587608720' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:48 compute-0 ceph-mon[75120]: osdmap e17: 3 total, 2 up, 3 in
Jan 20 19:03:48 compute-0 ceph-mon[75120]: mgrmap e11: compute-0.meyjbf(active, since 76s)
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:48 compute-0 ceph-mon[75120]: pgmap v46: 3 pgs: 2 unknown, 1 creating+peering; 0 B data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} : dispatch
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} : dispatch
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} : dispatch
Jan 20 19:03:48 compute-0 ceph-mon[75120]: Adjusting osd_memory_target on compute-0 to 43688k
Jan 20 19:03:48 compute-0 ceph-mon[75120]: Unable to set osd_memory_target on compute-0 to 44737331: error parsing value: Value '44737331' is below minimum 939524096
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:03:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:03:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e18 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:48 compute-0 podman[90250]: 2026-01-20 19:03:48.499302182 +0000 UTC m=+0.085677761 container create 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:03:48 compute-0 podman[90250]: 2026-01-20 19:03:48.45426319 +0000 UTC m=+0.040638839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:48 compute-0 systemd[1]: Started libpod-conmon-66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a.scope.
Jan 20 19:03:48 compute-0 ceph-osd[88112]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 21.849 iops: 5593.326 elapsed_sec: 0.536
Jan 20 19:03:48 compute-0 ceph-osd[88112]: log_channel(cluster) log [WRN] : OSD bench result of 5593.325970 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 19:03:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:48 compute-0 ceph-osd[88112]: osd.2 0 waiting for initial osdmap
Jan 20 19:03:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 19:03:48 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2420707572' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:48 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2[88108]: 2026-01-20T19:03:48.601+0000 7f48d4287640 -1 osd.2 0 waiting for initial osdmap
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:48 compute-0 ceph-osd[88112]: osd.2 18 crush map has features 288514051259236352, adjusting msgr requires for clients
Jan 20 19:03:48 compute-0 ceph-osd[88112]: osd.2 18 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Jan 20 19:03:48 compute-0 ceph-osd[88112]: osd.2 18 crush map has features 3314933000852226048, adjusting msgr requires for osds
Jan 20 19:03:48 compute-0 ceph-osd[88112]: osd.2 18 check_osdmap_features require_osd_release unknown -> tentacle
Jan 20 19:03:48 compute-0 podman[90250]: 2026-01-20 19:03:48.6339798 +0000 UTC m=+0.220355409 container init 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:48 compute-0 podman[90250]: 2026-01-20 19:03:48.646516239 +0000 UTC m=+0.232891818 container start 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:48 compute-0 podman[90250]: 2026-01-20 19:03:48.650453093 +0000 UTC m=+0.236828682 container attach 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 19:03:48 compute-0 ceph-osd[88112]: osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 19:03:48 compute-0 ceph-osd[88112]: osd.2 18 set_numa_affinity not setting numa affinity
Jan 20 19:03:48 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-osd-2[88108]: 2026-01-20T19:03:48.650+0000 7f48cf08c640 -1 osd.2 18 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Jan 20 19:03:48 compute-0 ceph-osd[88112]: osd.2 18 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial no unique device path for loop5: no symlink to loop5 in /dev/disk/by-path
Jan 20 19:03:49 compute-0 pedantic_ishizaka[90267]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:03:49 compute-0 pedantic_ishizaka[90267]: --> All data devices are unavailable
Jan 20 19:03:49 compute-0 ceph-mgr[75417]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/1748615462; not ready for session (expect reconnect)
Jan 20 19:03:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:49 compute-0 ceph-mgr[75417]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Jan 20 19:03:49 compute-0 systemd[1]: libpod-66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a.scope: Deactivated successfully.
Jan 20 19:03:49 compute-0 podman[90250]: 2026-01-20 19:03:49.216399674 +0000 UTC m=+0.802775263 container died 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-410bb633f7f83eb942a23891d8b36d0f1a98d4f99b9c47013a5911397c5ed422-merged.mount: Deactivated successfully.
Jan 20 19:03:49 compute-0 podman[90250]: 2026-01-20 19:03:49.291199455 +0000 UTC m=+0.877575034 container remove 66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_ishizaka, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:03:49 compute-0 systemd[1]: libpod-conmon-66485fcb8941ad4e4cda9a989d99c6a71fdad846c79d388bf90b6df9eb96975a.scope: Deactivated successfully.
Jan 20 19:03:49 compute-0 sudo[90113]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Jan 20 19:03:49 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2420707572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Jan 20 19:03:49 compute-0 recursing_banach[90190]: pool 'backups' created
Jan 20 19:03:49 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462] boot
Jan 20 19:03:49 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Jan 20 19:03:49 compute-0 sudo[90302]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:49 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:03:49 compute-0 ceph-osd[88112]: osd.2 19 state: booting -> active
Jan 20 19:03:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Jan 20 19:03:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:49 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 19 pg[2.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[16,19)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:03:49 compute-0 ceph-mon[75120]: osdmap e18: 3 total, 2 up, 3 in
Jan 20 19:03:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:49 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2420707572' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:49 compute-0 sudo[90302]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:49 compute-0 sudo[90302]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:49 compute-0 systemd[1]: libpod-1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc.scope: Deactivated successfully.
Jan 20 19:03:49 compute-0 podman[90163]: 2026-01-20 19:03:49.440580864 +0000 UTC m=+1.533542241 container died 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-41669442d668b330d4fbc8924bc8305ea8441bbe3994169f58273133c59939f2-merged.mount: Deactivated successfully.
Jan 20 19:03:49 compute-0 podman[90163]: 2026-01-20 19:03:49.485049204 +0000 UTC m=+1.578010591 container remove 1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc (image=quay.io/ceph/ceph:v20, name=recursing_banach, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:49 compute-0 sudo[90328]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:03:49 compute-0 sudo[90328]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:49 compute-0 systemd[1]: libpod-conmon-1435086fc69204ba4416c0191f21e95e013f89bf5085fe01dee11dbbab58fdcc.scope: Deactivated successfully.
Jan 20 19:03:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v49: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 20 19:03:49 compute-0 sudo[90158]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:49 compute-0 sudo[90387]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pedypwruztydkfteaqilxtlcqozpwclp ; /usr/bin/python3'
Jan 20 19:03:49 compute-0 sudo[90387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:49 compute-0 podman[90402]: 2026-01-20 19:03:49.807489134 +0000 UTC m=+0.059910528 container create a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:49 compute-0 python3[90389]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:49 compute-0 systemd[1]: Started libpod-conmon-a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40.scope.
Jan 20 19:03:49 compute-0 podman[90402]: 2026-01-20 19:03:49.775955413 +0000 UTC m=+0.028376907 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:49 compute-0 podman[90402]: 2026-01-20 19:03:49.881492707 +0000 UTC m=+0.133914121 container init a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:03:49 compute-0 podman[90402]: 2026-01-20 19:03:49.89001534 +0000 UTC m=+0.142436724 container start a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:03:49 compute-0 podman[90402]: 2026-01-20 19:03:49.893685048 +0000 UTC m=+0.146106432 container attach a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:49 compute-0 gifted_dijkstra[90425]: 167 167
Jan 20 19:03:49 compute-0 systemd[1]: libpod-a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40.scope: Deactivated successfully.
Jan 20 19:03:49 compute-0 conmon[90425]: conmon a4d88a2ae5164763967a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40.scope/container/memory.events
Jan 20 19:03:49 compute-0 podman[90402]: 2026-01-20 19:03:49.896121595 +0000 UTC m=+0.148542979 container died a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b65f4e05a1993f24000be98da48f480654b5d3397fad67454b1a61107763cbcb-merged.mount: Deactivated successfully.
Jan 20 19:03:49 compute-0 podman[90416]: 2026-01-20 19:03:49.935944344 +0000 UTC m=+0.091023699 container create 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:49 compute-0 podman[90402]: 2026-01-20 19:03:49.945831939 +0000 UTC m=+0.198253323 container remove a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gifted_dijkstra, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:49 compute-0 systemd[1]: libpod-conmon-a4d88a2ae5164763967a9299f2dc9d4331e743a90ab3306f4e15e6b0320fdb40.scope: Deactivated successfully.
Jan 20 19:03:49 compute-0 systemd[1]: Started libpod-conmon-8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc.scope.
Jan 20 19:03:49 compute-0 podman[90416]: 2026-01-20 19:03:49.886962657 +0000 UTC m=+0.042042092 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d52c71b5c3da83d91a8d76d0cece9d2180eef66f50b20d06a67add422857ed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47d52c71b5c3da83d91a8d76d0cece9d2180eef66f50b20d06a67add422857ed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:50 compute-0 podman[90416]: 2026-01-20 19:03:50.035983097 +0000 UTC m=+0.191062672 container init 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:03:50 compute-0 podman[90416]: 2026-01-20 19:03:50.044500839 +0000 UTC m=+0.199580184 container start 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:50 compute-0 podman[90416]: 2026-01-20 19:03:50.048037764 +0000 UTC m=+0.203117149 container attach 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:50 compute-0 podman[90465]: 2026-01-20 19:03:50.092605166 +0000 UTC m=+0.041147352 container create b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:50 compute-0 systemd[1]: Started libpod-conmon-b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27.scope.
Jan 20 19:03:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9514beaf9f70dc386aba57fda56c0a59ecd8c465d39b04baf2c87f24c6499613/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9514beaf9f70dc386aba57fda56c0a59ecd8c465d39b04baf2c87f24c6499613/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9514beaf9f70dc386aba57fda56c0a59ecd8c465d39b04baf2c87f24c6499613/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9514beaf9f70dc386aba57fda56c0a59ecd8c465d39b04baf2c87f24c6499613/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:50 compute-0 podman[90465]: 2026-01-20 19:03:50.165725697 +0000 UTC m=+0.114267893 container init b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 19:03:50 compute-0 podman[90465]: 2026-01-20 19:03:50.073864949 +0000 UTC m=+0.022407135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:50 compute-0 podman[90465]: 2026-01-20 19:03:50.175331636 +0000 UTC m=+0.123873812 container start b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:50 compute-0 podman[90465]: 2026-01-20 19:03:50.179293381 +0000 UTC m=+0.127835587 container attach b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:50 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Jan 20 19:03:50 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Jan 20 19:03:50 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Jan 20 19:03:50 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:03:50 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 20 pg[2.0( empty local-lis/les=19/20 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=19) [2] r=0 lpr=19 pi=[16,19)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:03:50 compute-0 ceph-mon[75120]: OSD bench result of 5593.325970 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Jan 20 19:03:50 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2420707572' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:50 compute-0 ceph-mon[75120]: osd.2 [v2:192.168.122.100:6810/1748615462,v1:192.168.122.100:6811/1748615462] boot
Jan 20 19:03:50 compute-0 ceph-mon[75120]: osdmap e19: 3 total, 3 up, 3 in
Jan 20 19:03:50 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd metadata", "id": 2} : dispatch
Jan 20 19:03:50 compute-0 ceph-mon[75120]: pgmap v49: 4 pgs: 2 active+clean, 2 unknown; 449 KiB data, 453 MiB used, 40 GiB / 40 GiB avail
Jan 20 19:03:50 compute-0 ceph-mon[75120]: osdmap e20: 3 total, 3 up, 3 in
Jan 20 19:03:50 compute-0 keen_curran[90482]: {
Jan 20 19:03:50 compute-0 keen_curran[90482]:     "0": [
Jan 20 19:03:50 compute-0 keen_curran[90482]:         {
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "devices": [
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "/dev/loop3"
Jan 20 19:03:50 compute-0 keen_curran[90482]:             ],
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_name": "ceph_lv0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_size": "21470642176",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "name": "ceph_lv0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "tags": {
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.cluster_name": "ceph",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.crush_device_class": "",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.encrypted": "0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.objectstore": "bluestore",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.osd_id": "0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.type": "block",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.vdo": "0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.with_tpm": "0"
Jan 20 19:03:50 compute-0 keen_curran[90482]:             },
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "type": "block",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "vg_name": "ceph_vg0"
Jan 20 19:03:50 compute-0 keen_curran[90482]:         }
Jan 20 19:03:50 compute-0 keen_curran[90482]:     ],
Jan 20 19:03:50 compute-0 keen_curran[90482]:     "1": [
Jan 20 19:03:50 compute-0 keen_curran[90482]:         {
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "devices": [
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "/dev/loop4"
Jan 20 19:03:50 compute-0 keen_curran[90482]:             ],
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_name": "ceph_lv1",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_size": "21470642176",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "name": "ceph_lv1",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "tags": {
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.cluster_name": "ceph",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.crush_device_class": "",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.encrypted": "0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.objectstore": "bluestore",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.osd_id": "1",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.type": "block",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.vdo": "0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.with_tpm": "0"
Jan 20 19:03:50 compute-0 keen_curran[90482]:             },
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "type": "block",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "vg_name": "ceph_vg1"
Jan 20 19:03:50 compute-0 keen_curran[90482]:         }
Jan 20 19:03:50 compute-0 keen_curran[90482]:     ],
Jan 20 19:03:50 compute-0 keen_curran[90482]:     "2": [
Jan 20 19:03:50 compute-0 keen_curran[90482]:         {
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "devices": [
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "/dev/loop5"
Jan 20 19:03:50 compute-0 keen_curran[90482]:             ],
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_name": "ceph_lv2",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_size": "21470642176",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "name": "ceph_lv2",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "tags": {
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.cluster_name": "ceph",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.crush_device_class": "",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.encrypted": "0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.objectstore": "bluestore",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.osd_id": "2",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.type": "block",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.vdo": "0",
Jan 20 19:03:50 compute-0 keen_curran[90482]:                 "ceph.with_tpm": "0"
Jan 20 19:03:50 compute-0 keen_curran[90482]:             },
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "type": "block",
Jan 20 19:03:50 compute-0 keen_curran[90482]:             "vg_name": "ceph_vg2"
Jan 20 19:03:50 compute-0 keen_curran[90482]:         }
Jan 20 19:03:50 compute-0 keen_curran[90482]:     ]
Jan 20 19:03:50 compute-0 keen_curran[90482]: }
Jan 20 19:03:50 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 19:03:50 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/324467649' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:50 compute-0 systemd[1]: libpod-b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27.scope: Deactivated successfully.
Jan 20 19:03:50 compute-0 podman[90465]: 2026-01-20 19:03:50.496773603 +0000 UTC m=+0.445315779 container died b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:03:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-9514beaf9f70dc386aba57fda56c0a59ecd8c465d39b04baf2c87f24c6499613-merged.mount: Deactivated successfully.
Jan 20 19:03:50 compute-0 podman[90465]: 2026-01-20 19:03:50.543345323 +0000 UTC m=+0.491887499 container remove b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=keen_curran, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:50 compute-0 systemd[1]: libpod-conmon-b5f8ac62bcde1d020fabe4a8ce2bf3c7ea43f2b1ed713814ca190cfae3207f27.scope: Deactivated successfully.
Jan 20 19:03:50 compute-0 sudo[90328]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:50 compute-0 sudo[90524]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:03:50 compute-0 sudo[90524]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:50 compute-0 sudo[90524]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:50 compute-0 sudo[90549]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:03:50 compute-0 sudo[90549]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:50 compute-0 podman[90587]: 2026-01-20 19:03:50.960824527 +0000 UTC m=+0.037402002 container create 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:03:50 compute-0 systemd[1]: Started libpod-conmon-9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da.scope.
Jan 20 19:03:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:51 compute-0 podman[90587]: 2026-01-20 19:03:51.030467216 +0000 UTC m=+0.107044711 container init 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 19:03:51 compute-0 podman[90587]: 2026-01-20 19:03:51.036350436 +0000 UTC m=+0.112927911 container start 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:51 compute-0 podman[90587]: 2026-01-20 19:03:51.039983312 +0000 UTC m=+0.116560787 container attach 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:03:51 compute-0 podman[90587]: 2026-01-20 19:03:50.943590476 +0000 UTC m=+0.020167971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:51 compute-0 festive_hellman[90603]: 167 167
Jan 20 19:03:51 compute-0 systemd[1]: libpod-9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da.scope: Deactivated successfully.
Jan 20 19:03:51 compute-0 podman[90587]: 2026-01-20 19:03:51.042082903 +0000 UTC m=+0.118660398 container died 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 20 19:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-59d291de53a878af06775c535a2c32f9b940f522d27670283ea8aacb8b08a96f-merged.mount: Deactivated successfully.
Jan 20 19:03:51 compute-0 podman[90587]: 2026-01-20 19:03:51.078766836 +0000 UTC m=+0.155344311 container remove 9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_hellman, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:51 compute-0 systemd[1]: libpod-conmon-9fee0d65bcb332e28e88a20689b932d7af3f43b599f2a442b80171dbd69056da.scope: Deactivated successfully.
Jan 20 19:03:51 compute-0 podman[90628]: 2026-01-20 19:03:51.232297484 +0000 UTC m=+0.044774908 container create 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:03:51 compute-0 systemd[1]: Started libpod-conmon-7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c.scope.
Jan 20 19:03:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316cefa8c100b0c8be9ecc29854992fc3ed9c2e574fe82007bcf3b719deb7823/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316cefa8c100b0c8be9ecc29854992fc3ed9c2e574fe82007bcf3b719deb7823/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316cefa8c100b0c8be9ecc29854992fc3ed9c2e574fe82007bcf3b719deb7823/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:51 compute-0 podman[90628]: 2026-01-20 19:03:51.216077847 +0000 UTC m=+0.028555301 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/316cefa8c100b0c8be9ecc29854992fc3ed9c2e574fe82007bcf3b719deb7823/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:51 compute-0 podman[90628]: 2026-01-20 19:03:51.322153544 +0000 UTC m=+0.134630988 container init 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:03:51 compute-0 podman[90628]: 2026-01-20 19:03:51.328510705 +0000 UTC m=+0.140988129 container start 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:51 compute-0 podman[90628]: 2026-01-20 19:03:51.332177083 +0000 UTC m=+0.144654507 container attach 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Jan 20 19:03:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/324467649' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Jan 20 19:03:51 compute-0 determined_cori[90456]: pool 'images' created
Jan 20 19:03:51 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Jan 20 19:03:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:03:51 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/324467649' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:51 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/324467649' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:51 compute-0 ceph-mon[75120]: osdmap e21: 3 total, 3 up, 3 in
Jan 20 19:03:51 compute-0 podman[90416]: 2026-01-20 19:03:51.437693986 +0000 UTC m=+1.592773341 container died 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 19:03:51 compute-0 systemd[1]: libpod-8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc.scope: Deactivated successfully.
Jan 20 19:03:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-47d52c71b5c3da83d91a8d76d0cece9d2180eef66f50b20d06a67add422857ed-merged.mount: Deactivated successfully.
Jan 20 19:03:51 compute-0 podman[90416]: 2026-01-20 19:03:51.47939808 +0000 UTC m=+1.634477445 container remove 8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc (image=quay.io/ceph/ceph:v20, name=determined_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 19:03:51 compute-0 systemd[1]: libpod-conmon-8e090e0fa250dccf455f792aee0ba8325f1f6eec5ba7b95133c6fdfebfa58ffc.scope: Deactivated successfully.
Jan 20 19:03:51 compute-0 sudo[90387]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v52: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:03:51 compute-0 sudo[90695]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfotenftqhyidbqoilawvkvfcpgzkcsb ; /usr/bin/python3'
Jan 20 19:03:51 compute-0 sudo[90695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:51 compute-0 python3[90698]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:51 compute-0 podman[90726]: 2026-01-20 19:03:51.853785168 +0000 UTC m=+0.049953051 container create 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:51 compute-0 systemd[1]: Started libpod-conmon-2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365.scope.
Jan 20 19:03:51 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c71e51d6e16ffbb74d61b8f3720d0508df6b30cabaf2226452f918153006c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/51c71e51d6e16ffbb74d61b8f3720d0508df6b30cabaf2226452f918153006c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:51 compute-0 podman[90726]: 2026-01-20 19:03:51.833809181 +0000 UTC m=+0.029977084 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:51 compute-0 podman[90726]: 2026-01-20 19:03:51.933575529 +0000 UTC m=+0.129743442 container init 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 19:03:51 compute-0 podman[90726]: 2026-01-20 19:03:51.93997136 +0000 UTC m=+0.136139243 container start 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:51 compute-0 podman[90726]: 2026-01-20 19:03:51.943899294 +0000 UTC m=+0.140067177 container attach 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle)
Jan 20 19:03:52 compute-0 lvm[90778]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:03:52 compute-0 lvm[90775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:03:52 compute-0 lvm[90778]: VG ceph_vg1 finished
Jan 20 19:03:52 compute-0 lvm[90775]: VG ceph_vg0 finished
Jan 20 19:03:52 compute-0 lvm[90782]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:03:52 compute-0 lvm[90782]: VG ceph_vg2 finished
Jan 20 19:03:52 compute-0 distracted_dirac[90644]: {}
Jan 20 19:03:52 compute-0 systemd[1]: libpod-7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c.scope: Deactivated successfully.
Jan 20 19:03:52 compute-0 systemd[1]: libpod-7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c.scope: Consumed 1.368s CPU time.
Jan 20 19:03:52 compute-0 conmon[90644]: conmon 7968c5f89e5635d6ca58 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c.scope/container/memory.events
Jan 20 19:03:52 compute-0 podman[90628]: 2026-01-20 19:03:52.187528618 +0000 UTC m=+1.000006042 container died 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-316cefa8c100b0c8be9ecc29854992fc3ed9c2e574fe82007bcf3b719deb7823-merged.mount: Deactivated successfully.
Jan 20 19:03:52 compute-0 podman[90628]: 2026-01-20 19:03:52.240490249 +0000 UTC m=+1.052967673 container remove 7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 20 19:03:52 compute-0 systemd[1]: libpod-conmon-7968c5f89e5635d6ca58b2f704e380d1b0a67457b8a3b91e5ea96e408599095c.scope: Deactivated successfully.
Jan 20 19:03:52 compute-0 sudo[90549]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:03:52 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:03:52 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:52 compute-0 sudo[90814]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:03:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 19:03:52 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1314466985' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:52 compute-0 sudo[90814]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:03:52 compute-0 sudo[90814]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Jan 20 19:03:52 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1314466985' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Jan 20 19:03:52 compute-0 awesome_cannon[90762]: pool 'cephfs.cephfs.meta' created
Jan 20 19:03:52 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Jan 20 19:03:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:03:52 compute-0 systemd[1]: libpod-2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365.scope: Deactivated successfully.
Jan 20 19:03:52 compute-0 podman[90726]: 2026-01-20 19:03:52.446100617 +0000 UTC m=+0.642268500 container died 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:52 compute-0 ceph-mon[75120]: pgmap v52: 5 pgs: 1 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:03:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:03:52 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1314466985' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:52 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1314466985' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:52 compute-0 ceph-mon[75120]: osdmap e22: 3 total, 3 up, 3 in
Jan 20 19:03:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-51c71e51d6e16ffbb74d61b8f3720d0508df6b30cabaf2226452f918153006c2-merged.mount: Deactivated successfully.
Jan 20 19:03:52 compute-0 podman[90726]: 2026-01-20 19:03:52.482813012 +0000 UTC m=+0.678980895 container remove 2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365 (image=quay.io/ceph/ceph:v20, name=awesome_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 19:03:52 compute-0 systemd[1]: libpod-conmon-2bccd579676eb617fdf17dca67c92d873a9cbbf9e460258638d6a0e1b146b365.scope: Deactivated successfully.
Jan 20 19:03:52 compute-0 sudo[90695]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:52 compute-0 sudo[90879]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fefbypfajaylhtdhqveanvqmfhjkezlz ; /usr/bin/python3'
Jan 20 19:03:52 compute-0 sudo[90879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:52 compute-0 python3[90881]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 22 pg[6.0( empty local-lis/les=0/0 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:03:52 compute-0 podman[90882]: 2026-01-20 19:03:52.853168153 +0000 UTC m=+0.057508341 container create 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:52 compute-0 systemd[1]: Started libpod-conmon-74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c.scope.
Jan 20 19:03:52 compute-0 podman[90882]: 2026-01-20 19:03:52.823550228 +0000 UTC m=+0.027890506 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fe11634e6aeb11000cf5b7d296a56cce13c5c4a678725cf2a73841d8e7b20b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07fe11634e6aeb11000cf5b7d296a56cce13c5c4a678725cf2a73841d8e7b20b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:52 compute-0 podman[90882]: 2026-01-20 19:03:52.942793478 +0000 UTC m=+0.147133676 container init 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 19:03:52 compute-0 podman[90882]: 2026-01-20 19:03:52.949943528 +0000 UTC m=+0.154283726 container start 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:52 compute-0 podman[90882]: 2026-01-20 19:03:52.953967284 +0000 UTC m=+0.158307492 container attach 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 19:03:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Jan 20 19:03:53 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3858958607' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Jan 20 19:03:53 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3858958607' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Jan 20 19:03:53 compute-0 loving_germain[90897]: pool 'cephfs.cephfs.data' created
Jan 20 19:03:53 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Jan 20 19:03:53 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=22/23 n=0 ec=22/22 lis/c=0/0 les/c/f=0/0/0 sis=22) [0] r=0 lpr=22 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:03:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 23 pg[7.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:03:53 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3858958607' entity='client.admin' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} : dispatch
Jan 20 19:03:53 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3858958607' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Jan 20 19:03:53 compute-0 ceph-mon[75120]: osdmap e23: 3 total, 3 up, 3 in
Jan 20 19:03:53 compute-0 systemd[1]: libpod-74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c.scope: Deactivated successfully.
Jan 20 19:03:53 compute-0 podman[90882]: 2026-01-20 19:03:53.458690817 +0000 UTC m=+0.663031045 container died 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 19:03:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e23 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-07fe11634e6aeb11000cf5b7d296a56cce13c5c4a678725cf2a73841d8e7b20b-merged.mount: Deactivated successfully.
Jan 20 19:03:53 compute-0 podman[90882]: 2026-01-20 19:03:53.499902869 +0000 UTC m=+0.704243057 container remove 74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c (image=quay.io/ceph/ceph:v20, name=loving_germain, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 19:03:53 compute-0 systemd[1]: libpod-conmon-74d64b3e6c6e4c902033362b006ba3fcf966d8697a9dabc83e3e61d1c1e0ab7c.scope: Deactivated successfully.
Jan 20 19:03:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 3 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:03:53 compute-0 sudo[90879]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:53 compute-0 sudo[90958]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgeljvxdvaxdpwpchbsleknkxnzrjgoy ; /usr/bin/python3'
Jan 20 19:03:53 compute-0 sudo[90958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:53 compute-0 python3[90960]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:53 compute-0 podman[90961]: 2026-01-20 19:03:53.889346745 +0000 UTC m=+0.044611653 container create 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Jan 20 19:03:53 compute-0 systemd[1]: Started libpod-conmon-3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b.scope.
Jan 20 19:03:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2415667277a05059cad56aeb6ed849a2f8acf38f6bd98b183472f5e6805cb8e7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2415667277a05059cad56aeb6ed849a2f8acf38f6bd98b183472f5e6805cb8e7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:53 compute-0 podman[90961]: 2026-01-20 19:03:53.869794229 +0000 UTC m=+0.025059187 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:53 compute-0 podman[90961]: 2026-01-20 19:03:53.966589876 +0000 UTC m=+0.121854804 container init 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:03:53 compute-0 podman[90961]: 2026-01-20 19:03:53.971138664 +0000 UTC m=+0.126403572 container start 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:03:53 compute-0 podman[90961]: 2026-01-20 19:03:53.974100395 +0000 UTC m=+0.129365303 container attach 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:03:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Jan 20 19:03:54 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1189008687' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 20 19:03:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Jan 20 19:03:54 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1189008687' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 20 19:03:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Jan 20 19:03:54 compute-0 hardcore_matsumoto[90976]: enabled application 'rbd' on pool 'vms'
Jan 20 19:03:54 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Jan 20 19:03:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [1] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:03:54 compute-0 systemd[1]: libpod-3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b.scope: Deactivated successfully.
Jan 20 19:03:54 compute-0 podman[90961]: 2026-01-20 19:03:54.458267628 +0000 UTC m=+0.613532536 container died 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:54 compute-0 ceph-mon[75120]: pgmap v55: 7 pgs: 3 unknown, 2 creating+peering, 2 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:03:54 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1189008687' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} : dispatch
Jan 20 19:03:54 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1189008687' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Jan 20 19:03:54 compute-0 ceph-mon[75120]: osdmap e24: 3 total, 3 up, 3 in
Jan 20 19:03:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-2415667277a05059cad56aeb6ed849a2f8acf38f6bd98b183472f5e6805cb8e7-merged.mount: Deactivated successfully.
Jan 20 19:03:54 compute-0 podman[90961]: 2026-01-20 19:03:54.498868095 +0000 UTC m=+0.654133003 container remove 3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b (image=quay.io/ceph/ceph:v20, name=hardcore_matsumoto, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:54 compute-0 systemd[1]: libpod-conmon-3e73f3fdafb04110d0ae9cfacdb617e5bce10bbd08086de4091ea7a99b93566b.scope: Deactivated successfully.
Jan 20 19:03:54 compute-0 sudo[90958]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:54 compute-0 sudo[91035]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vgzqyocnzbohxdhtytyzdcpkbdpbsool ; /usr/bin/python3'
Jan 20 19:03:54 compute-0 sudo[91035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:54 compute-0 python3[91037]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:54 compute-0 podman[91038]: 2026-01-20 19:03:54.850022729 +0000 UTC m=+0.051975429 container create 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 19:03:54 compute-0 systemd[1]: Started libpod-conmon-66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905.scope.
Jan 20 19:03:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce0681b8613906dae3768166f173509c0ba2b3818ba53a533ea46d29afcbdfe/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ce0681b8613906dae3768166f173509c0ba2b3818ba53a533ea46d29afcbdfe/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:54 compute-0 podman[91038]: 2026-01-20 19:03:54.920912248 +0000 UTC m=+0.122864968 container init 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:54 compute-0 podman[91038]: 2026-01-20 19:03:54.827712348 +0000 UTC m=+0.029665098 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:54 compute-0 podman[91038]: 2026-01-20 19:03:54.927338161 +0000 UTC m=+0.129290901 container start 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 19:03:54 compute-0 podman[91038]: 2026-01-20 19:03:54.931986821 +0000 UTC m=+0.133939551 container attach 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:03:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Jan 20 19:03:55 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/789911532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 20 19:03:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Jan 20 19:03:55 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/789911532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 20 19:03:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Jan 20 19:03:55 compute-0 sad_liskov[91054]: enabled application 'rbd' on pool 'volumes'
Jan 20 19:03:55 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Jan 20 19:03:55 compute-0 systemd[1]: libpod-66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905.scope: Deactivated successfully.
Jan 20 19:03:55 compute-0 conmon[91054]: conmon 66af91f94602c1a678ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905.scope/container/memory.events
Jan 20 19:03:55 compute-0 podman[91038]: 2026-01-20 19:03:55.466972235 +0000 UTC m=+0.668924985 container died 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:55 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/789911532' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} : dispatch
Jan 20 19:03:55 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/789911532' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Jan 20 19:03:55 compute-0 ceph-mon[75120]: osdmap e25: 3 total, 3 up, 3 in
Jan 20 19:03:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-9ce0681b8613906dae3768166f173509c0ba2b3818ba53a533ea46d29afcbdfe-merged.mount: Deactivated successfully.
Jan 20 19:03:55 compute-0 podman[91038]: 2026-01-20 19:03:55.511086046 +0000 UTC m=+0.713038746 container remove 66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905 (image=quay.io/ceph/ceph:v20, name=sad_liskov, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:03:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:03:55 compute-0 systemd[1]: libpod-conmon-66af91f94602c1a678ab93ac60f2a4fda5485ac7eebdf6f9650e65c161e90905.scope: Deactivated successfully.
Jan 20 19:03:55 compute-0 sudo[91035]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:55 compute-0 sudo[91113]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpexnwprsdeaxzcvlxixxkjvfbzgzsri ; /usr/bin/python3'
Jan 20 19:03:55 compute-0 sudo[91113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:55 compute-0 python3[91115]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:55 compute-0 podman[91116]: 2026-01-20 19:03:55.900510562 +0000 UTC m=+0.091649874 container create 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:55 compute-0 podman[91116]: 2026-01-20 19:03:55.834282754 +0000 UTC m=+0.025422106 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:55 compute-0 systemd[1]: Started libpod-conmon-92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3.scope.
Jan 20 19:03:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5b19ac70930a6307f1fe155b037839c63ff2649f7896583d1de7633ba41375/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb5b19ac70930a6307f1fe155b037839c63ff2649f7896583d1de7633ba41375/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:56 compute-0 podman[91116]: 2026-01-20 19:03:56.272942234 +0000 UTC m=+0.464081556 container init 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Jan 20 19:03:56 compute-0 podman[91116]: 2026-01-20 19:03:56.279941931 +0000 UTC m=+0.471081233 container start 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 19:03:56 compute-0 podman[91116]: 2026-01-20 19:03:56.283909556 +0000 UTC m=+0.475048878 container attach 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:56 compute-0 ceph-mon[75120]: pgmap v58: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:03:56 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Jan 20 19:03:56 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/747428867' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 20 19:03:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Jan 20 19:03:57 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/747428867' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} : dispatch
Jan 20 19:03:57 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/747428867' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 20 19:03:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Jan 20 19:03:57 compute-0 jovial_hodgkin[91131]: enabled application 'rbd' on pool 'backups'
Jan 20 19:03:57 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Jan 20 19:03:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v60: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:03:57 compute-0 systemd[1]: libpod-92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3.scope: Deactivated successfully.
Jan 20 19:03:57 compute-0 podman[91116]: 2026-01-20 19:03:57.535780506 +0000 UTC m=+1.726919808 container died 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-cb5b19ac70930a6307f1fe155b037839c63ff2649f7896583d1de7633ba41375-merged.mount: Deactivated successfully.
Jan 20 19:03:57 compute-0 podman[91116]: 2026-01-20 19:03:57.579124919 +0000 UTC m=+1.770264221 container remove 92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3 (image=quay.io/ceph/ceph:v20, name=jovial_hodgkin, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:57 compute-0 systemd[1]: libpod-conmon-92e7c002611f52fd6ac7de9ee6cabd0817cea2d6e953b1144c5f9c17f48d68c3.scope: Deactivated successfully.
Jan 20 19:03:57 compute-0 sudo[91113]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:57 compute-0 sudo[91192]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvzsnsaulidrivsxzdieehnvddfcpxjt ; /usr/bin/python3'
Jan 20 19:03:57 compute-0 sudo[91192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:57 compute-0 python3[91194]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:57 compute-0 podman[91195]: 2026-01-20 19:03:57.965634996 +0000 UTC m=+0.047821510 container create 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 19:03:57 compute-0 systemd[1]: Started libpod-conmon-8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd.scope.
Jan 20 19:03:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ffb8a7bf8082b542b38d93d8602ae365915f8d23517453c64a1737c2df1016/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74ffb8a7bf8082b542b38d93d8602ae365915f8d23517453c64a1737c2df1016/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:58 compute-0 podman[91195]: 2026-01-20 19:03:58.028963144 +0000 UTC m=+0.111149668 container init 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 19:03:58 compute-0 podman[91195]: 2026-01-20 19:03:57.940913996 +0000 UTC m=+0.023100530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:58 compute-0 podman[91195]: 2026-01-20 19:03:58.036616466 +0000 UTC m=+0.118802980 container start 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:03:58 compute-0 podman[91195]: 2026-01-20 19:03:58.039770002 +0000 UTC m=+0.121956546 container attach 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:03:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Jan 20 19:03:58 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4156469610' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 20 19:03:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e26 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:03:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Jan 20 19:03:58 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/747428867' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Jan 20 19:03:58 compute-0 ceph-mon[75120]: osdmap e26: 3 total, 3 up, 3 in
Jan 20 19:03:58 compute-0 ceph-mon[75120]: pgmap v60: 7 pgs: 1 creating+peering, 6 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:03:58 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/4156469610' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} : dispatch
Jan 20 19:03:58 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4156469610' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 20 19:03:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Jan 20 19:03:58 compute-0 heuristic_hopper[91211]: enabled application 'rbd' on pool 'images'
Jan 20 19:03:58 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Jan 20 19:03:58 compute-0 systemd[1]: libpod-8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd.scope: Deactivated successfully.
Jan 20 19:03:58 compute-0 podman[91195]: 2026-01-20 19:03:58.536120294 +0000 UTC m=+0.618306848 container died 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:03:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-74ffb8a7bf8082b542b38d93d8602ae365915f8d23517453c64a1737c2df1016-merged.mount: Deactivated successfully.
Jan 20 19:03:58 compute-0 podman[91195]: 2026-01-20 19:03:58.585665555 +0000 UTC m=+0.667852079 container remove 8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd (image=quay.io/ceph/ceph:v20, name=heuristic_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:03:58 compute-0 systemd[1]: libpod-conmon-8f73a5caefa45aff7851eee31b88f8a9a00566812e3cae122272d129fcf073bd.scope: Deactivated successfully.
Jan 20 19:03:58 compute-0 sudo[91192]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:58 compute-0 sudo[91273]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyoaexheouivfovoiprkmvnsngzffcch ; /usr/bin/python3'
Jan 20 19:03:58 compute-0 sudo[91273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:58 compute-0 python3[91275]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:58 compute-0 podman[91276]: 2026-01-20 19:03:58.913178386 +0000 UTC m=+0.050330639 container create 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:03:58 compute-0 systemd[1]: Started libpod-conmon-0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3.scope.
Jan 20 19:03:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d52e83c202da7da81ce9b353a7256a4acbf872ffc8196766ee8cab5dbb61b3f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:58 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d52e83c202da7da81ce9b353a7256a4acbf872ffc8196766ee8cab5dbb61b3f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:58 compute-0 podman[91276]: 2026-01-20 19:03:58.887425853 +0000 UTC m=+0.024578126 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:03:58 compute-0 podman[91276]: 2026-01-20 19:03:58.986551054 +0000 UTC m=+0.123703327 container init 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:58 compute-0 podman[91276]: 2026-01-20 19:03:58.991493762 +0000 UTC m=+0.128646015 container start 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:03:58 compute-0 podman[91276]: 2026-01-20 19:03:58.994653738 +0000 UTC m=+0.131805991 container attach 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:59 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Jan 20 19:03:59 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2777962107' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 20 19:03:59 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Jan 20 19:03:59 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/4156469610' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Jan 20 19:03:59 compute-0 ceph-mon[75120]: osdmap e27: 3 total, 3 up, 3 in
Jan 20 19:03:59 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2777962107' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} : dispatch
Jan 20 19:03:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:03:59 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2777962107' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 20 19:03:59 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Jan 20 19:03:59 compute-0 ecstatic_chebyshev[91292]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Jan 20 19:03:59 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Jan 20 19:03:59 compute-0 systemd[1]: libpod-0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3.scope: Deactivated successfully.
Jan 20 19:03:59 compute-0 conmon[91292]: conmon 0aee58d34afa3691208d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3.scope/container/memory.events
Jan 20 19:03:59 compute-0 podman[91276]: 2026-01-20 19:03:59.552078426 +0000 UTC m=+0.689230679 container died 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-9d52e83c202da7da81ce9b353a7256a4acbf872ffc8196766ee8cab5dbb61b3f-merged.mount: Deactivated successfully.
Jan 20 19:03:59 compute-0 podman[91276]: 2026-01-20 19:03:59.598175614 +0000 UTC m=+0.735327867 container remove 0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3 (image=quay.io/ceph/ceph:v20, name=ecstatic_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:03:59 compute-0 systemd[1]: libpod-conmon-0aee58d34afa3691208df5f864720a806c85fffb639031d8f97f550d7185bae3.scope: Deactivated successfully.
Jan 20 19:03:59 compute-0 sudo[91273]: pam_unix(sudo:session): session closed for user root
Jan 20 19:03:59 compute-0 sudo[91351]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifnzgeorphnfcqtbfnusprptjegzbcaa ; /usr/bin/python3'
Jan 20 19:03:59 compute-0 sudo[91351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:03:59 compute-0 python3[91353]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:03:59 compute-0 podman[91354]: 2026-01-20 19:03:59.930150051 +0000 UTC m=+0.039052571 container create 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 19:03:59 compute-0 systemd[1]: Started libpod-conmon-52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209.scope.
Jan 20 19:03:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7470744e9363f4280d001fba692992eda8ef033d59348a109fbcc813f1ffa1/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:03:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab7470744e9363f4280d001fba692992eda8ef033d59348a109fbcc813f1ffa1/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:00 compute-0 podman[91354]: 2026-01-20 19:04:00.003695793 +0000 UTC m=+0.112598333 container init 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 19:04:00 compute-0 podman[91354]: 2026-01-20 19:04:00.008883267 +0000 UTC m=+0.117785827 container start 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 19:04:00 compute-0 podman[91354]: 2026-01-20 19:03:59.913564546 +0000 UTC m=+0.022467096 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:00 compute-0 podman[91354]: 2026-01-20 19:04:00.013300482 +0000 UTC m=+0.122203022 container attach 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Jan 20 19:04:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3772668256' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 20 19:04:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Jan 20 19:04:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3772668256' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 20 19:04:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Jan 20 19:04:00 compute-0 quirky_wing[91369]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Jan 20 19:04:00 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Jan 20 19:04:00 compute-0 ceph-mon[75120]: pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:00 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2777962107' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Jan 20 19:04:00 compute-0 ceph-mon[75120]: osdmap e28: 3 total, 3 up, 3 in
Jan 20 19:04:00 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3772668256' entity='client.admin' cmd={"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} : dispatch
Jan 20 19:04:00 compute-0 systemd[1]: libpod-52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209.scope: Deactivated successfully.
Jan 20 19:04:00 compute-0 podman[91354]: 2026-01-20 19:04:00.57491687 +0000 UTC m=+0.683819390 container died 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 19:04:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-ab7470744e9363f4280d001fba692992eda8ef033d59348a109fbcc813f1ffa1-merged.mount: Deactivated successfully.
Jan 20 19:04:00 compute-0 podman[91354]: 2026-01-20 19:04:00.620555857 +0000 UTC m=+0.729458387 container remove 52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209 (image=quay.io/ceph/ceph:v20, name=quirky_wing, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 19:04:00 compute-0 systemd[1]: libpod-conmon-52371658b586f13db715ae5461c0688686ee040bedcd30a1ca33b3d03a357209.scope: Deactivated successfully.
Jan 20 19:04:00 compute-0 sudo[91351]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:01 compute-0 python3[91481]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 19:04:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:01 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3772668256' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Jan 20 19:04:01 compute-0 ceph-mon[75120]: osdmap e29: 3 total, 3 up, 3 in
Jan 20 19:04:01 compute-0 python3[91552]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935841.2559671-36589-95521937731709/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:04:02 compute-0 sudo[91652]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkyeifbmpycaopvhsvqouqmjlbsgkesk ; /usr/bin/python3'
Jan 20 19:04:02 compute-0 sudo[91652]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:02 compute-0 python3[91654]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 19:04:02 compute-0 sudo[91652]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:02 compute-0 ceph-mon[75120]: pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:02 compute-0 sudo[91727]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibgekabvbecbnfagsidxarsximzggboq ; /usr/bin/python3'
Jan 20 19:04:02 compute-0 sudo[91727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:02 compute-0 python3[91729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935842.1561182-36603-218795273964965/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=6e4615d43abe95e636d62123fc987968919dda9e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:04:02 compute-0 sudo[91727]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:03 compute-0 sudo[91777]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmcwqolwvaysecahxsbbygavtwtgeyuc ; /usr/bin/python3'
Jan 20 19:04:03 compute-0 sudo[91777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:03 compute-0 python3[91779]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:03 compute-0 podman[91780]: 2026-01-20 19:04:03.30323791 +0000 UTC m=+0.053048915 container create b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 19:04:03 compute-0 systemd[1]: Started libpod-conmon-b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac.scope.
Jan 20 19:04:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2192d83717af1e137250aeb2f6bc2b79e3c846c68a80f25b0ffa1d64e983e517/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2192d83717af1e137250aeb2f6bc2b79e3c846c68a80f25b0ffa1d64e983e517/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2192d83717af1e137250aeb2f6bc2b79e3c846c68a80f25b0ffa1d64e983e517/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:03 compute-0 podman[91780]: 2026-01-20 19:04:03.283657574 +0000 UTC m=+0.033468579 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:03 compute-0 podman[91780]: 2026-01-20 19:04:03.38762825 +0000 UTC m=+0.137439265 container init b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:04:03 compute-0 podman[91780]: 2026-01-20 19:04:03.394430592 +0000 UTC m=+0.144241577 container start b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 19:04:03 compute-0 podman[91780]: 2026-01-20 19:04:03.398196642 +0000 UTC m=+0.148007627 container attach b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Jan 20 19:04:03 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2750738983' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 19:04:03 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2750738983' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 19:04:03 compute-0 modest_kapitsa[91795]: 
Jan 20 19:04:03 compute-0 modest_kapitsa[91795]: [global]
Jan 20 19:04:03 compute-0 modest_kapitsa[91795]:         fsid = 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:04:03 compute-0 modest_kapitsa[91795]:         mon_host = 192.168.122.100
Jan 20 19:04:03 compute-0 modest_kapitsa[91795]:         rgw_keystone_api_version = 3
Jan 20 19:04:03 compute-0 systemd[1]: libpod-b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac.scope: Deactivated successfully.
Jan 20 19:04:03 compute-0 podman[91780]: 2026-01-20 19:04:03.842751941 +0000 UTC m=+0.592562926 container died b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-2192d83717af1e137250aeb2f6bc2b79e3c846c68a80f25b0ffa1d64e983e517-merged.mount: Deactivated successfully.
Jan 20 19:04:03 compute-0 podman[91780]: 2026-01-20 19:04:03.880609873 +0000 UTC m=+0.630420858 container remove b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac (image=quay.io/ceph/ceph:v20, name=modest_kapitsa, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:03 compute-0 sudo[91820]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:03 compute-0 sudo[91820]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:03 compute-0 sudo[91820]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:03 compute-0 systemd[1]: libpod-conmon-b3c89a4508e791839fdc3ef20eaf866f25ee4dcb03fc2a47d3bc389fd4116dac.scope: Deactivated successfully.
Jan 20 19:04:03 compute-0 sudo[91777]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:03 compute-0 sudo[91857]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 20 19:04:03 compute-0 sudo[91857]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:04 compute-0 sudo[91905]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zoebgtbvmmrcvofohoelewnmtasdumzk ; /usr/bin/python3'
Jan 20 19:04:04 compute-0 sudo[91905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:04 compute-0 python3[91907]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:04 compute-0 podman[91931]: 2026-01-20 19:04:04.276959664 +0000 UTC m=+0.050149375 container create 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:04:04 compute-0 systemd[1]: Started libpod-conmon-995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30.scope.
Jan 20 19:04:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe6ef15523479e0f71ca53572883930ea680854fb8867b6f3ba412072cd2cbc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe6ef15523479e0f71ca53572883930ea680854fb8867b6f3ba412072cd2cbc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3fe6ef15523479e0f71ca53572883930ea680854fb8867b6f3ba412072cd2cbc/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:04 compute-0 podman[91931]: 2026-01-20 19:04:04.252152544 +0000 UTC m=+0.025342285 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:04 compute-0 podman[91931]: 2026-01-20 19:04:04.355979137 +0000 UTC m=+0.129168858 container init 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 19:04:04 compute-0 podman[91931]: 2026-01-20 19:04:04.362038881 +0000 UTC m=+0.135228582 container start 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 19:04:04 compute-0 podman[91931]: 2026-01-20 19:04:04.366568549 +0000 UTC m=+0.139758270 container attach 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:04:04 compute-0 podman[91962]: 2026-01-20 19:04:04.391539704 +0000 UTC m=+0.071812502 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:04 compute-0 podman[91962]: 2026-01-20 19:04:04.513458398 +0000 UTC m=+0.193731226 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:04:04 compute-0 ceph-mon[75120]: pgmap v66: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:04 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2750738983' entity='client.admin' cmd={"prefix": "config assimilate-conf"} : dispatch
Jan 20 19:04:04 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2750738983' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Jan 20 19:04:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Jan 20 19:04:04 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3688985674' entity='client.admin' 
Jan 20 19:04:04 compute-0 admiring_liskov[91964]: set ssl_option
Jan 20 19:04:04 compute-0 systemd[1]: libpod-995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30.scope: Deactivated successfully.
Jan 20 19:04:04 compute-0 conmon[91964]: conmon 995faee01cb89e1bf260 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30.scope/container/memory.events
Jan 20 19:04:04 compute-0 podman[91931]: 2026-01-20 19:04:04.94086434 +0000 UTC m=+0.714054051 container died 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 19:04:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-3fe6ef15523479e0f71ca53572883930ea680854fb8867b6f3ba412072cd2cbc-merged.mount: Deactivated successfully.
Jan 20 19:04:04 compute-0 podman[91931]: 2026-01-20 19:04:04.978249399 +0000 UTC m=+0.751439100 container remove 995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30 (image=quay.io/ceph/ceph:v20, name=admiring_liskov, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 19:04:04 compute-0 systemd[1]: libpod-conmon-995faee01cb89e1bf260da2f879355f3c25e9762a55847a44256c93498ffec30.scope: Deactivated successfully.
Jan 20 19:04:05 compute-0 sudo[91905]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:05 compute-0 sudo[92140]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wddjrojmslgpvonhjtkmkqyfxznwohmv ; /usr/bin/python3'
Jan 20 19:04:05 compute-0 sudo[92140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:05 compute-0 sudo[91857]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:05 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:05 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:05 compute-0 python3[92147]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:05 compute-0 podman[92173]: 2026-01-20 19:04:05.382881978 +0000 UTC m=+0.043546288 container create b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:05 compute-0 sudo[92174]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:05 compute-0 sudo[92174]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:05 compute-0 sudo[92174]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:05 compute-0 systemd[1]: Started libpod-conmon-b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999.scope.
Jan 20 19:04:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15813e9c6b0b2aec89aae91e56a52815db237dedc77c467701261889dd69467b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15813e9c6b0b2aec89aae91e56a52815db237dedc77c467701261889dd69467b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/15813e9c6b0b2aec89aae91e56a52815db237dedc77c467701261889dd69467b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:05 compute-0 podman[92173]: 2026-01-20 19:04:05.365409132 +0000 UTC m=+0.026073462 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:05 compute-0 sudo[92214]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:04:05 compute-0 sudo[92214]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:05 compute-0 podman[92173]: 2026-01-20 19:04:05.470480795 +0000 UTC m=+0.131145115 container init b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:04:05 compute-0 podman[92173]: 2026-01-20 19:04:05.478840694 +0000 UTC m=+0.139504994 container start b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:04:05 compute-0 podman[92173]: 2026-01-20 19:04:05.482811529 +0000 UTC m=+0.143475839 container attach b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:04:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:05 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3688985674' entity='client.admin' 
Jan 20 19:04:05 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:05 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:05 compute-0 ceph-mon[75120]: pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:05 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14236 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:04:05 compute-0 ceph-mgr[75417]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Jan 20 19:04:05 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 20 19:04:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 19:04:05 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:05 compute-0 youthful_faraday[92222]: Scheduled rgw.rgw update...
Jan 20 19:04:05 compute-0 systemd[1]: libpod-b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999.scope: Deactivated successfully.
Jan 20 19:04:05 compute-0 podman[92173]: 2026-01-20 19:04:05.95178565 +0000 UTC m=+0.612449960 container died b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-15813e9c6b0b2aec89aae91e56a52815db237dedc77c467701261889dd69467b-merged.mount: Deactivated successfully.
Jan 20 19:04:06 compute-0 podman[92173]: 2026-01-20 19:04:06.002754304 +0000 UTC m=+0.663418634 container remove b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999 (image=quay.io/ceph/ceph:v20, name=youthful_faraday, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:06 compute-0 systemd[1]: libpod-conmon-b189a3ea3a04604ca4b3fc14d7961731cae244f3456384b1d19cdb88a199f999.scope: Deactivated successfully.
Jan 20 19:04:06 compute-0 sudo[92140]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:06 compute-0 sudo[92214]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:04:06 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:04:06 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:04:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:04:06 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:04:06 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:04:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:04:06 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:04:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:04:06 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:06 compute-0 sudo[92307]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:06 compute-0 sudo[92307]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:06 compute-0 sudo[92307]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:06 compute-0 sudo[92332]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:04:06 compute-0 sudo[92332]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:06 compute-0 podman[92369]: 2026-01-20 19:04:06.515972379 +0000 UTC m=+0.049497690 container create 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:04:06 compute-0 systemd[1]: Started libpod-conmon-81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d.scope.
Jan 20 19:04:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:06 compute-0 podman[92369]: 2026-01-20 19:04:06.495318537 +0000 UTC m=+0.028843888 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:06 compute-0 podman[92369]: 2026-01-20 19:04:06.60544798 +0000 UTC m=+0.138973311 container init 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 19:04:06 compute-0 podman[92369]: 2026-01-20 19:04:06.611536136 +0000 UTC m=+0.145061447 container start 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:06 compute-0 podman[92369]: 2026-01-20 19:04:06.615990732 +0000 UTC m=+0.149516063 container attach 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:06 compute-0 hopeful_curran[92385]: 167 167
Jan 20 19:04:06 compute-0 systemd[1]: libpod-81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d.scope: Deactivated successfully.
Jan 20 19:04:06 compute-0 podman[92369]: 2026-01-20 19:04:06.61967381 +0000 UTC m=+0.153199121 container died 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-5b99b9a6c8f115c6cd1482734075d3778795515029ed16e52dd97eac82f93084-merged.mount: Deactivated successfully.
Jan 20 19:04:06 compute-0 podman[92369]: 2026-01-20 19:04:06.66461223 +0000 UTC m=+0.198137581 container remove 81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 19:04:06 compute-0 systemd[1]: libpod-conmon-81fbe976cf9d140fd7328d358393f99a1f235d9017c03af245d6a6dc5c2a4e2d.scope: Deactivated successfully.
Jan 20 19:04:06 compute-0 podman[92460]: 2026-01-20 19:04:06.83337555 +0000 UTC m=+0.042753439 container create 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 19:04:06 compute-0 systemd[1]: Started libpod-conmon-4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce.scope.
Jan 20 19:04:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:06 compute-0 podman[92460]: 2026-01-20 19:04:06.812500472 +0000 UTC m=+0.021878391 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:06 compute-0 podman[92460]: 2026-01-20 19:04:06.910307082 +0000 UTC m=+0.119684971 container init 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:06 compute-0 podman[92460]: 2026-01-20 19:04:06.916982481 +0000 UTC m=+0.126360370 container start 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:06 compute-0 podman[92460]: 2026-01-20 19:04:06.920642879 +0000 UTC m=+0.130020788 container attach 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 19:04:06 compute-0 ceph-mon[75120]: from='client.14236 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:04:06 compute-0 ceph-mon[75120]: Saving service rgw.rgw spec with placement compute-0
Jan 20 19:04:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:04:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:04:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:04:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:06 compute-0 python3[92497]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 19:04:07 compute-0 python3[92577]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935846.6988952-36644-100073787657742/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:04:07 compute-0 xenodochial_goldberg[92500]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:04:07 compute-0 xenodochial_goldberg[92500]: --> All data devices are unavailable
Jan 20 19:04:07 compute-0 systemd[1]: libpod-4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce.scope: Deactivated successfully.
Jan 20 19:04:07 compute-0 podman[92460]: 2026-01-20 19:04:07.436398714 +0000 UTC m=+0.645776633 container died 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-a66a067890950ec90c6f740d16d45456efd54591e1fbc956e87d614358232c59-merged.mount: Deactivated successfully.
Jan 20 19:04:07 compute-0 podman[92460]: 2026-01-20 19:04:07.484435068 +0000 UTC m=+0.693812947 container remove 4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:07 compute-0 systemd[1]: libpod-conmon-4349646492a3b4b2cf47d664e0de5e7c210eb8a6ff58558f6d0c60ede54967ce.scope: Deactivated successfully.
Jan 20 19:04:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:07 compute-0 sudo[92332]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:07 compute-0 sudo[92629]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:07 compute-0 sudo[92629]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:07 compute-0 sudo[92629]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:07 compute-0 sudo[92654]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:04:07 compute-0 sudo[92654]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:07 compute-0 sudo[92702]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-milrljwjhosvkvbjlajbgvpfgaqyiauc ; /usr/bin/python3'
Jan 20 19:04:07 compute-0 sudo[92702]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:07 compute-0 python3[92704]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '
                                           _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:07 compute-0 podman[92705]: 2026-01-20 19:04:07.89759438 +0000 UTC m=+0.048517186 container create 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:07 compute-0 ceph-mon[75120]: pgmap v68: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:07 compute-0 systemd[1]: Started libpod-conmon-667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c.scope.
Jan 20 19:04:07 compute-0 podman[92705]: 2026-01-20 19:04:07.875237467 +0000 UTC m=+0.026160303 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:07 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c2f478feb18d070006d399d29c36c467e20c134dca4901a1e9fc3464950a9e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c2f478feb18d070006d399d29c36c467e20c134dca4901a1e9fc3464950a9e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c2f478feb18d070006d399d29c36c467e20c134dca4901a1e9fc3464950a9e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:07 compute-0 podman[92732]: 2026-01-20 19:04:07.986863776 +0000 UTC m=+0.046077908 container create 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:08 compute-0 podman[92705]: 2026-01-20 19:04:08.000697756 +0000 UTC m=+0.151620562 container init 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:08 compute-0 podman[92705]: 2026-01-20 19:04:08.016339949 +0000 UTC m=+0.167262755 container start 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:08 compute-0 podman[92705]: 2026-01-20 19:04:08.01975052 +0000 UTC m=+0.170673416 container attach 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 19:04:08 compute-0 systemd[1]: Started libpod-conmon-8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb.scope.
Jan 20 19:04:08 compute-0 podman[92732]: 2026-01-20 19:04:07.966922632 +0000 UTC m=+0.026136804 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:08 compute-0 podman[92732]: 2026-01-20 19:04:08.085000994 +0000 UTC m=+0.144215166 container init 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Jan 20 19:04:08 compute-0 podman[92732]: 2026-01-20 19:04:08.095383772 +0000 UTC m=+0.154597924 container start 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:08 compute-0 flamboyant_wiles[92753]: 167 167
Jan 20 19:04:08 compute-0 podman[92732]: 2026-01-20 19:04:08.099331875 +0000 UTC m=+0.158546007 container attach 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:08 compute-0 systemd[1]: libpod-8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb.scope: Deactivated successfully.
Jan 20 19:04:08 compute-0 podman[92732]: 2026-01-20 19:04:08.101305743 +0000 UTC m=+0.160519915 container died 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-f1f8883fc97305fe402c747789f2194c5951f0ae8f59c6dc688a13bbd5bcb8ce-merged.mount: Deactivated successfully.
Jan 20 19:04:08 compute-0 podman[92732]: 2026-01-20 19:04:08.154744666 +0000 UTC m=+0.213958828 container remove 8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_wiles, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:04:08 compute-0 systemd[1]: libpod-conmon-8aea4f87d0003e9e1df76372370ab5d919ea57d4c7c8fdf90834472af975cfbb.scope: Deactivated successfully.
Jan 20 19:04:08 compute-0 podman[92795]: 2026-01-20 19:04:08.347814525 +0000 UTC m=+0.051000827 container create 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:04:08 compute-0 systemd[1]: Started libpod-conmon-3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34.scope.
Jan 20 19:04:08 compute-0 podman[92795]: 2026-01-20 19:04:08.327941271 +0000 UTC m=+0.031127593 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c2f9d1b17b0f8475f6445b63c9ef65b606f19286a1d6b733d035262ec369a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c2f9d1b17b0f8475f6445b63c9ef65b606f19286a1d6b733d035262ec369a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c2f9d1b17b0f8475f6445b63c9ef65b606f19286a1d6b733d035262ec369a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59c2f9d1b17b0f8475f6445b63c9ef65b606f19286a1d6b733d035262ec369a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e29 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:08 compute-0 podman[92795]: 2026-01-20 19:04:08.480922555 +0000 UTC m=+0.184108857 container init 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 19:04:08 compute-0 podman[92795]: 2026-01-20 19:04:08.490585006 +0000 UTC m=+0.193771308 container start 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:04:08 compute-0 podman[92795]: 2026-01-20 19:04:08.494573261 +0000 UTC m=+0.197759573 container attach 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:08 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14238 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:04:08 compute-0 ceph-mgr[75417]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 20 19:04:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Jan 20 19:04:08 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 20 19:04:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Jan 20 19:04:08 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 20 19:04:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Jan 20 19:04:08 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 20 19:04:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Jan 20 19:04:08 compute-0 ceph-mon[75120]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 19:04:08 compute-0 ceph-mon[75120]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 20 19:04:08 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0[75116]: 2026-01-20T19:04:08.498+0000 7f03ac18a640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 19:04:08 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 20 19:04:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e2 new map
Jan 20 19:04:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e2 print_map
                                           e2
                                           btime 2026-01-20T19:04:08:498809+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T19:04:08.498557+0000
                                           modified        2026-01-20T19:04:08.498557+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
Jan 20 19:04:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Jan 20 19:04:08 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Jan 20 19:04:08 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Jan 20 19:04:08 compute-0 ceph-mgr[75417]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 20 19:04:08 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 20 19:04:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 19:04:08 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:08 compute-0 ceph-mgr[75417]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Jan 20 19:04:08 compute-0 systemd[1]: libpod-667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c.scope: Deactivated successfully.
Jan 20 19:04:08 compute-0 podman[92705]: 2026-01-20 19:04:08.541457417 +0000 UTC m=+0.692380223 container died 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-e1c2f478feb18d070006d399d29c36c467e20c134dca4901a1e9fc3464950a9e-merged.mount: Deactivated successfully.
Jan 20 19:04:08 compute-0 podman[92705]: 2026-01-20 19:04:08.588541609 +0000 UTC m=+0.739464425 container remove 667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c (image=quay.io/ceph/ceph:v20, name=hardcore_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:04:08 compute-0 systemd[1]: libpod-conmon-667676c507e859182f91b37675a7e46c8b8db2f11be50946ac0fa9bf1e85492c.scope: Deactivated successfully.
Jan 20 19:04:08 compute-0 sudo[92702]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:08 compute-0 sudo[92855]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgnwdkfsczxzyuwtyxtpedrblnnwjtkr ; /usr/bin/python3'
Jan 20 19:04:08 compute-0 sudo[92855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:08 compute-0 kind_lewin[92811]: {
Jan 20 19:04:08 compute-0 kind_lewin[92811]:     "0": [
Jan 20 19:04:08 compute-0 kind_lewin[92811]:         {
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "devices": [
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "/dev/loop3"
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             ],
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_name": "ceph_lv0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_size": "21470642176",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "name": "ceph_lv0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "tags": {
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.crush_device_class": "",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.encrypted": "0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.osd_id": "0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.type": "block",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.vdo": "0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.with_tpm": "0"
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             },
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "type": "block",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "vg_name": "ceph_vg0"
Jan 20 19:04:08 compute-0 kind_lewin[92811]:         }
Jan 20 19:04:08 compute-0 kind_lewin[92811]:     ],
Jan 20 19:04:08 compute-0 kind_lewin[92811]:     "1": [
Jan 20 19:04:08 compute-0 kind_lewin[92811]:         {
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "devices": [
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "/dev/loop4"
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             ],
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_name": "ceph_lv1",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_size": "21470642176",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "name": "ceph_lv1",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "tags": {
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.crush_device_class": "",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.encrypted": "0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.osd_id": "1",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.type": "block",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.vdo": "0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.with_tpm": "0"
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             },
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "type": "block",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "vg_name": "ceph_vg1"
Jan 20 19:04:08 compute-0 kind_lewin[92811]:         }
Jan 20 19:04:08 compute-0 kind_lewin[92811]:     ],
Jan 20 19:04:08 compute-0 kind_lewin[92811]:     "2": [
Jan 20 19:04:08 compute-0 kind_lewin[92811]:         {
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "devices": [
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "/dev/loop5"
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             ],
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_name": "ceph_lv2",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_size": "21470642176",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "name": "ceph_lv2",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "tags": {
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.crush_device_class": "",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.encrypted": "0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.osd_id": "2",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.type": "block",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.vdo": "0",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:                 "ceph.with_tpm": "0"
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             },
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "type": "block",
Jan 20 19:04:08 compute-0 kind_lewin[92811]:             "vg_name": "ceph_vg2"
Jan 20 19:04:08 compute-0 kind_lewin[92811]:         }
Jan 20 19:04:08 compute-0 kind_lewin[92811]:     ]
Jan 20 19:04:08 compute-0 kind_lewin[92811]: }
Jan 20 19:04:08 compute-0 systemd[1]: libpod-3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34.scope: Deactivated successfully.
Jan 20 19:04:08 compute-0 podman[92795]: 2026-01-20 19:04:08.881384774 +0000 UTC m=+0.584571076 container died 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-59c2f9d1b17b0f8475f6445b63c9ef65b606f19286a1d6b733d035262ec369a7-merged.mount: Deactivated successfully.
Jan 20 19:04:08 compute-0 podman[92795]: 2026-01-20 19:04:08.945776328 +0000 UTC m=+0.648962630 container remove 3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_lewin, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:04:08 compute-0 ceph-mon[75120]: from='client.14238 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:04:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} : dispatch
Jan 20 19:04:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} : dispatch
Jan 20 19:04:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} : dispatch
Jan 20 19:04:08 compute-0 ceph-mon[75120]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Jan 20 19:04:08 compute-0 ceph-mon[75120]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Jan 20 19:04:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Jan 20 19:04:08 compute-0 ceph-mon[75120]: osdmap e30: 3 total, 3 up, 3 in
Jan 20 19:04:08 compute-0 ceph-mon[75120]: fsmap cephfs:0
Jan 20 19:04:08 compute-0 ceph-mon[75120]: Saving service mds.cephfs spec with placement compute-0
Jan 20 19:04:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:08 compute-0 systemd[1]: libpod-conmon-3192266017a3a808fef67a3703d9774a42e43d8111e1fa6a7a66f4d524938d34.scope: Deactivated successfully.
Jan 20 19:04:08 compute-0 python3[92858]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:08 compute-0 sudo[92654]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:09 compute-0 podman[92872]: 2026-01-20 19:04:09.053606777 +0000 UTC m=+0.056330673 container create d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 19:04:09 compute-0 systemd[1]: Started libpod-conmon-d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1.scope.
Jan 20 19:04:09 compute-0 sudo[92878]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:09 compute-0 sudo[92878]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:09 compute-0 sudo[92878]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:09 compute-0 podman[92872]: 2026-01-20 19:04:09.029923943 +0000 UTC m=+0.032647839 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb700e23931605e36064bd3c59b9ecd8da5796fbdb4b1737d6a41de542042f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb700e23931605e36064bd3c59b9ecd8da5796fbdb4b1737d6a41de542042f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0fb700e23931605e36064bd3c59b9ecd8da5796fbdb4b1737d6a41de542042f/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:09 compute-0 podman[92872]: 2026-01-20 19:04:09.168815811 +0000 UTC m=+0.171539717 container init d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:09 compute-0 podman[92872]: 2026-01-20 19:04:09.175709675 +0000 UTC m=+0.178433561 container start d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:04:09 compute-0 podman[92872]: 2026-01-20 19:04:09.179375173 +0000 UTC m=+0.182099059 container attach d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:09 compute-0 sudo[92914]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:04:09 compute-0 sudo[92914]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:09 compute-0 podman[92969]: 2026-01-20 19:04:09.514942896 +0000 UTC m=+0.056012516 container create a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:09 compute-0 systemd[1]: Started libpod-conmon-a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177.scope.
Jan 20 19:04:09 compute-0 podman[92969]: 2026-01-20 19:04:09.488384813 +0000 UTC m=+0.029454443 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:09 compute-0 podman[92969]: 2026-01-20 19:04:09.624636619 +0000 UTC m=+0.165706219 container init a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 19:04:09 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:04:09 compute-0 ceph-mgr[75417]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Jan 20 19:04:09 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Jan 20 19:04:09 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 19:04:09 compute-0 podman[92969]: 2026-01-20 19:04:09.632307572 +0000 UTC m=+0.173377182 container start a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:09 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:09 compute-0 podman[92969]: 2026-01-20 19:04:09.638994411 +0000 UTC m=+0.180064011 container attach a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 19:04:09 compute-0 pedantic_lamport[92985]: 167 167
Jan 20 19:04:09 compute-0 crazy_stonebraker[92910]: Scheduled mds.cephfs update...
Jan 20 19:04:09 compute-0 systemd[1]: libpod-a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177.scope: Deactivated successfully.
Jan 20 19:04:09 compute-0 podman[92969]: 2026-01-20 19:04:09.641565062 +0000 UTC m=+0.182634742 container died a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:09 compute-0 systemd[1]: libpod-d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1.scope: Deactivated successfully.
Jan 20 19:04:09 compute-0 podman[92872]: 2026-01-20 19:04:09.670094652 +0000 UTC m=+0.672818538 container died d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 19:04:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-e0eae0b264fdcc54bf8c8388619e14aa7beaf55195d423ebc54c728ad15fd682-merged.mount: Deactivated successfully.
Jan 20 19:04:09 compute-0 podman[92969]: 2026-01-20 19:04:09.72878275 +0000 UTC m=+0.269852330 container remove a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:09 compute-0 systemd[1]: libpod-conmon-a852b162eb6ec6654ced103d254aca2bfc932b4c869bb91839f0ab82e59ca177.scope: Deactivated successfully.
Jan 20 19:04:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-b0fb700e23931605e36064bd3c59b9ecd8da5796fbdb4b1737d6a41de542042f-merged.mount: Deactivated successfully.
Jan 20 19:04:09 compute-0 podman[92872]: 2026-01-20 19:04:09.775139164 +0000 UTC m=+0.777863050 container remove d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1 (image=quay.io/ceph/ceph:v20, name=crazy_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:09 compute-0 systemd[1]: libpod-conmon-d3263fa00844fde62705ce7a25c2948969f7633f0715dabbf148300679b393e1.scope: Deactivated successfully.
Jan 20 19:04:09 compute-0 sudo[92855]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:09 compute-0 podman[93023]: 2026-01-20 19:04:09.916731807 +0000 UTC m=+0.045921725 container create 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:09 compute-0 systemd[1]: Started libpod-conmon-3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100.scope.
Jan 20 19:04:09 compute-0 podman[93023]: 2026-01-20 19:04:09.895278876 +0000 UTC m=+0.024468824 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5ce1a5e7bac3679f2b2b7772473ec0dce3ecf95d2da3c3c1da18f16129519c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5ce1a5e7bac3679f2b2b7772473ec0dce3ecf95d2da3c3c1da18f16129519c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5ce1a5e7bac3679f2b2b7772473ec0dce3ecf95d2da3c3c1da18f16129519c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bd5ce1a5e7bac3679f2b2b7772473ec0dce3ecf95d2da3c3c1da18f16129519c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:10 compute-0 podman[93023]: 2026-01-20 19:04:10.014115597 +0000 UTC m=+0.143305545 container init 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 19:04:10 compute-0 podman[93023]: 2026-01-20 19:04:10.026667816 +0000 UTC m=+0.155857724 container start 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True)
Jan 20 19:04:10 compute-0 podman[93023]: 2026-01-20 19:04:10.030483727 +0000 UTC m=+0.159673685 container attach 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 19:04:10 compute-0 ceph-mon[75120]: pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:10 compute-0 ceph-mon[75120]: from='client.14240 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:04:10 compute-0 ceph-mon[75120]: Saving service mds.cephfs spec with placement compute-0
Jan 20 19:04:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:10 compute-0 sudo[93189]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ifgupirlwaghzlofuxjstrfztnyshsel ; /usr/bin/python3'
Jan 20 19:04:10 compute-0 sudo[93189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:10 compute-0 lvm[93191]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:04:10 compute-0 lvm[93191]: VG ceph_vg0 finished
Jan 20 19:04:10 compute-0 lvm[93195]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:04:10 compute-0 lvm[93195]: VG ceph_vg1 finished
Jan 20 19:04:10 compute-0 lvm[93198]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:04:10 compute-0 lvm[93198]: VG ceph_vg2 finished
Jan 20 19:04:10 compute-0 zealous_dewdney[93039]: {}
Jan 20 19:04:10 compute-0 python3[93196]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 20 19:04:10 compute-0 sudo[93189]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:11 compute-0 systemd[1]: libpod-3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100.scope: Deactivated successfully.
Jan 20 19:04:11 compute-0 systemd[1]: libpod-3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100.scope: Consumed 1.537s CPU time.
Jan 20 19:04:11 compute-0 podman[93023]: 2026-01-20 19:04:11.011628348 +0000 UTC m=+1.140818296 container died 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 19:04:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-bd5ce1a5e7bac3679f2b2b7772473ec0dce3ecf95d2da3c3c1da18f16129519c-merged.mount: Deactivated successfully.
Jan 20 19:04:11 compute-0 podman[93023]: 2026-01-20 19:04:11.058568506 +0000 UTC m=+1.187758414 container remove 3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 20 19:04:11 compute-0 systemd[1]: libpod-conmon-3dfb31e38a7148a923606bfc8bbfaedcd5250b343c2c31437916f1a611958100.scope: Deactivated successfully.
Jan 20 19:04:11 compute-0 sudo[92914]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:11 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev 00eae6c6-6555-4af4-a1e9-816474e5931f (Updating rgw.rgw deployment (+1 -> 1))
Jan 20 19:04:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dbzrzk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Jan 20 19:04:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dbzrzk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 20 19:04:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dbzrzk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 19:04:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Jan 20 19:04:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:04:11 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:11 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.dbzrzk on compute-0
Jan 20 19:04:11 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.dbzrzk on compute-0
Jan 20 19:04:11 compute-0 sudo[93258]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:11 compute-0 sudo[93258]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:11 compute-0 sudo[93304]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zufuwygazflgfbibwtchcaklykcqjpjq ; /usr/bin/python3'
Jan 20 19:04:11 compute-0 sudo[93304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:11 compute-0 sudo[93258]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:11 compute-0 sudo[93309]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:04:11 compute-0 sudo[93309]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:11 compute-0 python3[93308]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768935850.6741452-36696-191857578290303/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=82f4fc7876a2f5ec58c3b05a59c81182fa299df3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:04:11 compute-0 sudo[93304]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:11 compute-0 podman[93401]: 2026-01-20 19:04:11.731980207 +0000 UTC m=+0.048016394 container create cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Jan 20 19:04:11 compute-0 sudo[93437]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jpvuwiyapplftkovzkzouohubavkufff ; /usr/bin/python3'
Jan 20 19:04:11 compute-0 sudo[93437]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:11 compute-0 systemd[1]: Started libpod-conmon-cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea.scope.
Jan 20 19:04:11 compute-0 podman[93401]: 2026-01-20 19:04:11.708941988 +0000 UTC m=+0.024978155 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:11 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:11 compute-0 podman[93401]: 2026-01-20 19:04:11.825771601 +0000 UTC m=+0.141807768 container init cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 19:04:11 compute-0 podman[93401]: 2026-01-20 19:04:11.832829919 +0000 UTC m=+0.148866066 container start cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle)
Jan 20 19:04:11 compute-0 podman[93401]: 2026-01-20 19:04:11.836197639 +0000 UTC m=+0.152233786 container attach cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:11 compute-0 epic_banzai[93442]: 167 167
Jan 20 19:04:11 compute-0 systemd[1]: libpod-cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea.scope: Deactivated successfully.
Jan 20 19:04:11 compute-0 podman[93401]: 2026-01-20 19:04:11.839060077 +0000 UTC m=+0.155096314 container died cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e7187c79086a57417d35fc0473df096bd16335a8263463e477d4f9160d438b7-merged.mount: Deactivated successfully.
Jan 20 19:04:11 compute-0 podman[93401]: 2026-01-20 19:04:11.893563996 +0000 UTC m=+0.209600143 container remove cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_banzai, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:11 compute-0 systemd[1]: libpod-conmon-cd1c7cc4184486551dd2ef48f852c4b850ab092e460730930935b866654b6eea.scope: Deactivated successfully.
Jan 20 19:04:11 compute-0 python3[93439]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:11 compute-0 systemd[1]: Reloading.
Jan 20 19:04:11 compute-0 podman[93458]: 2026-01-20 19:04:11.981329907 +0000 UTC m=+0.050998247 container create 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:04:12 compute-0 systemd-rc-local-generator[93505]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:04:12 compute-0 systemd-sysv-generator[93508]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:04:12 compute-0 podman[93458]: 2026-01-20 19:04:11.963298477 +0000 UTC m=+0.032966857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dbzrzk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} : dispatch
Jan 20 19:04:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dbzrzk", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Jan 20 19:04:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:12 compute-0 ceph-mon[75120]: Deploying daemon rgw.rgw.compute-0.dbzrzk on compute-0
Jan 20 19:04:12 compute-0 ceph-mon[75120]: pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:12 compute-0 systemd[1]: Started libpod-conmon-334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5.scope.
Jan 20 19:04:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0940efd8d6a05d75cf2d85c3002c41b239dfda26b78d634a0353f60f1df0d31/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0940efd8d6a05d75cf2d85c3002c41b239dfda26b78d634a0353f60f1df0d31/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:12 compute-0 podman[93458]: 2026-01-20 19:04:12.307788553 +0000 UTC m=+0.377456913 container init 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:12 compute-0 systemd[1]: Reloading.
Jan 20 19:04:12 compute-0 podman[93458]: 2026-01-20 19:04:12.314741339 +0000 UTC m=+0.384409679 container start 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:04:12 compute-0 podman[93458]: 2026-01-20 19:04:12.317927294 +0000 UTC m=+0.387595634 container attach 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 19:04:12 compute-0 systemd-rc-local-generator[93543]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:04:12 compute-0 systemd-sysv-generator[93547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:04:12 compute-0 systemd[1]: Starting Ceph rgw.rgw.compute-0.dbzrzk for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:04:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0)
Jan 20 19:04:12 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/330594453' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 20 19:04:12 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/330594453' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 20 19:04:12 compute-0 systemd[1]: libpod-334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5.scope: Deactivated successfully.
Jan 20 19:04:12 compute-0 podman[93624]: 2026-01-20 19:04:12.912249232 +0000 UTC m=+0.054280734 container create f7b32e8a4eacf49b2988d80d641eb016f2c8c1cdd12ab725d9b088006388cef5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-rgw-rgw-compute-0-dbzrzk, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:12 compute-0 podman[93635]: 2026-01-20 19:04:12.935632638 +0000 UTC m=+0.048152217 container died 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:04:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f0940efd8d6a05d75cf2d85c3002c41b239dfda26b78d634a0353f60f1df0d31-merged.mount: Deactivated successfully.
Jan 20 19:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fcc0afba7e1b181c020d867241bd3c1745c9e0cdf873a743c606c5da11eaf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fcc0afba7e1b181c020d867241bd3c1745c9e0cdf873a743c606c5da11eaf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fcc0afba7e1b181c020d867241bd3c1745c9e0cdf873a743c606c5da11eaf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50fcc0afba7e1b181c020d867241bd3c1745c9e0cdf873a743c606c5da11eaf9/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.dbzrzk supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:12 compute-0 podman[93624]: 2026-01-20 19:04:12.886192261 +0000 UTC m=+0.028223743 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:12 compute-0 podman[93635]: 2026-01-20 19:04:12.99195477 +0000 UTC m=+0.104474319 container remove 334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5 (image=quay.io/ceph/ceph:v20, name=nifty_mcclintock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:12 compute-0 systemd[1]: libpod-conmon-334023a7c8a9dcffee9a7efd21b140c6d69131b864071238c30abdd599511bc5.scope: Deactivated successfully.
Jan 20 19:04:13 compute-0 podman[93624]: 2026-01-20 19:04:13.00371398 +0000 UTC m=+0.145745502 container init f7b32e8a4eacf49b2988d80d641eb016f2c8c1cdd12ab725d9b088006388cef5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-rgw-rgw-compute-0-dbzrzk, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:13 compute-0 podman[93624]: 2026-01-20 19:04:13.016217258 +0000 UTC m=+0.158248730 container start f7b32e8a4eacf49b2988d80d641eb016f2c8c1cdd12ab725d9b088006388cef5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-rgw-rgw-compute-0-dbzrzk, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:13 compute-0 bash[93624]: f7b32e8a4eacf49b2988d80d641eb016f2c8c1cdd12ab725d9b088006388cef5
Jan 20 19:04:13 compute-0 sudo[93437]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:13 compute-0 systemd[1]: Started Ceph rgw.rgw.compute-0.dbzrzk for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:04:13 compute-0 radosgw[93659]: deferred set uid:gid to 167:167 (ceph:ceph)
Jan 20 19:04:13 compute-0 radosgw[93659]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process radosgw, pid 2
Jan 20 19:04:13 compute-0 radosgw[93659]: framework: beast
Jan 20 19:04:13 compute-0 radosgw[93659]: framework conf key: endpoint, val: 192.168.122.100:8082
Jan 20 19:04:13 compute-0 radosgw[93659]: init_numa not setting numa affinity
Jan 20 19:04:13 compute-0 sudo[93309]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 19:04:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:13 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev 00eae6c6-6555-4af4-a1e9-816474e5931f (Updating rgw.rgw deployment (+1 -> 1))
Jan 20 19:04:13 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 00eae6c6-6555-4af4-a1e9-816474e5931f (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Jan 20 19:04:13 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Jan 20 19:04:13 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Jan 20 19:04:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 19:04:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Jan 20 19:04:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:13 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/330594453' entity='client.admin' cmd={"prefix": "auth import"} : dispatch
Jan 20 19:04:13 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/330594453' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Jan 20 19:04:13 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:13 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:13 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:13 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:13 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:13 compute-0 sudo[93688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:04:13 compute-0 sudo[93688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:13 compute-0 sudo[93688]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:13 compute-0 sudo[93713]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:13 compute-0 sudo[93713]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:13 compute-0 sudo[93713]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:13 compute-0 sudo[93738]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 20 19:04:13 compute-0 sudo[93738]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e30 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:13 compute-0 sudo[93786]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wcfvqnfnjcqglaoexeqjlufccfzqmqhi ; /usr/bin/python3'
Jan 20 19:04:13 compute-0 sudo[93786]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:13 compute-0 python3[93795]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:13 compute-0 podman[93821]: 2026-01-20 19:04:13.787146892 +0000 UTC m=+0.045483225 container create 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:04:13 compute-0 systemd[1]: Started libpod-conmon-605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d.scope.
Jan 20 19:04:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Jan 20 19:04:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Jan 20 19:04:13 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Jan 20 19:04:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Jan 20 19:04:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2810622424' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 20 19:04:13 compute-0 podman[93846]: 2026-01-20 19:04:13.858496892 +0000 UTC m=+0.073047772 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16fbe8cefb5d85cfdf3cc59a6c99e4fecf5b2698aff3a752bc4c9c956ebb59a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c16fbe8cefb5d85cfdf3cc59a6c99e4fecf5b2698aff3a752bc4c9c956ebb59a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:13 compute-0 podman[93821]: 2026-01-20 19:04:13.768550109 +0000 UTC m=+0.026886472 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:13 compute-0 podman[93821]: 2026-01-20 19:04:13.877963456 +0000 UTC m=+0.136299789 container init 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:04:13 compute-0 podman[93821]: 2026-01-20 19:04:13.88404577 +0000 UTC m=+0.142382103 container start 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:13 compute-0 podman[93821]: 2026-01-20 19:04:13.887612095 +0000 UTC m=+0.145948498 container attach 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:04:13 compute-0 podman[93846]: 2026-01-20 19:04:13.953034313 +0000 UTC m=+0.167585193 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 19:04:13 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 31 pg[8.0( empty local-lis/les=0/0 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:14 compute-0 ceph-mon[75120]: Saving service rgw.rgw spec with placement compute-0
Jan 20 19:04:14 compute-0 ceph-mon[75120]: pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:14 compute-0 ceph-mon[75120]: osdmap e31: 3 total, 3 up, 3 in
Jan 20 19:04:14 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2810622424' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} : dispatch
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1555469958' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 19:04:14 compute-0 sweet_hypatia[93864]: 
Jan 20 19:04:14 compute-0 sweet_hypatia[93864]: {"fsid":"90fff835-31df-513f-a409-b6642f04e6ac","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":127,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":31,"num_osds":3,"num_up_osds":3,"osd_up_since":1768935829,"num_in_osds":3,"osd_in_since":1768935800,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83841024,"bytes_avail":64328085504,"bytes_total":64411926528},"fsmap":{"epoch":2,"btime":"2026-01-20T19:04:08:498809+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-01-20T19:03:35.512911+0000","services":{}},"progress_events":{}}
Jan 20 19:04:14 compute-0 systemd[1]: libpod-605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d.scope: Deactivated successfully.
Jan 20 19:04:14 compute-0 podman[93821]: 2026-01-20 19:04:14.424506204 +0000 UTC m=+0.682842547 container died 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-c16fbe8cefb5d85cfdf3cc59a6c99e4fecf5b2698aff3a752bc4c9c956ebb59a-merged.mount: Deactivated successfully.
Jan 20 19:04:14 compute-0 podman[93821]: 2026-01-20 19:04:14.468921983 +0000 UTC m=+0.727258316 container remove 605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d (image=quay.io/ceph/ceph:v20, name=sweet_hypatia, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:14 compute-0 ceph-mgr[75417]: [progress INFO root] Writing back 4 completed events
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:14 compute-0 systemd[1]: libpod-conmon-605e1f01c089522ab143f7b76d44683b844bae783cf6c5f437c31c852e3a435d.scope: Deactivated successfully.
Jan 20 19:04:14 compute-0 sudo[93786]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:14 compute-0 sudo[93738]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:14 compute-0 sudo[94073]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-okmlvzygjscxgunxwuwyjslgvxhrvywc ; /usr/bin/python3'
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:14 compute-0 sudo[94073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:14 compute-0 sudo[94076]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:14 compute-0 sudo[94076]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:14 compute-0 sudo[94076]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:14 compute-0 sudo[94101]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:04:14 compute-0 sudo[94101]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:14 compute-0 python3[94075]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:14 compute-0 podman[94126]: 2026-01-20 19:04:14.825550957 +0000 UTC m=+0.043261242 container create c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2810622424' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 20 19:04:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Jan 20 19:04:14 compute-0 systemd[1]: Started libpod-conmon-c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674.scope.
Jan 20 19:04:14 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Jan 20 19:04:14 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 32 pg[8.0( empty local-lis/les=31/32 n=0 ec=31/31 lis/c=0/0 les/c/f=0/0/0 sis=31) [1] r=0 lpr=31 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263fd83bea58b40a394ff632397138a1df6045fbda57ef2beda3e4a07a87fc11/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/263fd83bea58b40a394ff632397138a1df6045fbda57ef2beda3e4a07a87fc11/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:14 compute-0 podman[94126]: 2026-01-20 19:04:14.804800413 +0000 UTC m=+0.022510718 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:14 compute-0 podman[94126]: 2026-01-20 19:04:14.910581373 +0000 UTC m=+0.128291648 container init c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:14 compute-0 podman[94126]: 2026-01-20 19:04:14.926859261 +0000 UTC m=+0.144569536 container start c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:14 compute-0 podman[94126]: 2026-01-20 19:04:14.980672512 +0000 UTC m=+0.198382787 container attach c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:04:15 compute-0 podman[94719]: 2026-01-20 19:04:15.064785096 +0000 UTC m=+0.039143544 container create c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:04:15 compute-0 systemd[1]: Started libpod-conmon-c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b.scope.
Jan 20 19:04:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:15 compute-0 podman[94719]: 2026-01-20 19:04:15.138674506 +0000 UTC m=+0.113032994 container init c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:15 compute-0 podman[94719]: 2026-01-20 19:04:15.048702913 +0000 UTC m=+0.023061381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:15 compute-0 podman[94719]: 2026-01-20 19:04:15.147496646 +0000 UTC m=+0.121855114 container start c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:15 compute-0 podman[94719]: 2026-01-20 19:04:15.151928391 +0000 UTC m=+0.126286949 container attach c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:15 compute-0 xenodochial_euclid[94754]: 167 167
Jan 20 19:04:15 compute-0 systemd[1]: libpod-c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b.scope: Deactivated successfully.
Jan 20 19:04:15 compute-0 podman[94719]: 2026-01-20 19:04:15.154199486 +0000 UTC m=+0.128557944 container died c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:15 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1555469958' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 19:04:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:04:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:04:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:04:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:15 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2810622424' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Jan 20 19:04:15 compute-0 ceph-mon[75120]: osdmap e32: 3 total, 3 up, 3 in
Jan 20 19:04:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d9b72fbab088873c31668fb20ff70bd9f02e4c757de19c4bbc766de8f1277a6-merged.mount: Deactivated successfully.
Jan 20 19:04:15 compute-0 podman[94719]: 2026-01-20 19:04:15.198745237 +0000 UTC m=+0.173103685 container remove c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_euclid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:04:15 compute-0 systemd[1]: libpod-conmon-c07aa64d37d5384d5c0a01f62a9aaee1df5fc81f7266df0b12135b8a39354b4b.scope: Deactivated successfully.
Jan 20 19:04:15 compute-0 podman[94778]: 2026-01-20 19:04:15.348961235 +0000 UTC m=+0.046741004 container create dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:04:15 compute-0 systemd[1]: Started libpod-conmon-dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362.scope.
Jan 20 19:04:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Jan 20 19:04:15 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3006177493' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 20 19:04:15 compute-0 exciting_ritchie[94142]: 
Jan 20 19:04:15 compute-0 exciting_ritchie[94142]: {"epoch":1,"fsid":"90fff835-31df-513f-a409-b6642f04e6ac","modified":"2026-01-20T19:02:02.864397Z","created":"2026-01-20T19:02:02.864397Z","min_mon_release":20,"min_mon_release_name":"tentacle","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid","tentacle"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Jan 20 19:04:15 compute-0 exciting_ritchie[94142]: dumped monmap epoch 1
Jan 20 19:04:15 compute-0 podman[94778]: 2026-01-20 19:04:15.331078809 +0000 UTC m=+0.028858598 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:15 compute-0 systemd[1]: libpod-c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674.scope: Deactivated successfully.
Jan 20 19:04:15 compute-0 podman[94126]: 2026-01-20 19:04:15.442986155 +0000 UTC m=+0.660696430 container died c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:15 compute-0 podman[94778]: 2026-01-20 19:04:15.458065954 +0000 UTC m=+0.155845743 container init dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:04:15 compute-0 podman[94778]: 2026-01-20 19:04:15.472849257 +0000 UTC m=+0.170629026 container start dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-263fd83bea58b40a394ff632397138a1df6045fbda57ef2beda3e4a07a87fc11-merged.mount: Deactivated successfully.
Jan 20 19:04:15 compute-0 podman[94778]: 2026-01-20 19:04:15.479841913 +0000 UTC m=+0.177621682 container attach dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 19:04:15 compute-0 podman[94126]: 2026-01-20 19:04:15.49483242 +0000 UTC m=+0.712542695 container remove c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674 (image=quay.io/ceph/ceph:v20, name=exciting_ritchie, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:04:15 compute-0 systemd[1]: libpod-conmon-c1917acb5b564dccaeae3de8ee9bdb788b33cf14092c9b74aa3447934d2d3674.scope: Deactivated successfully.
Jan 20 19:04:15 compute-0 sudo[94073]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v75: 8 pgs: 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Jan 20 19:04:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Jan 20 19:04:15 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Jan 20 19:04:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Jan 20 19:04:15 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 20 19:04:15 compute-0 sudo[94844]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hagfpqkerggsjqwnlcrymfchuamirspj ; /usr/bin/python3'
Jan 20 19:04:15 compute-0 sudo[94844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:15 compute-0 relaxed_gauss[94795]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:04:15 compute-0 relaxed_gauss[94795]: --> All data devices are unavailable
Jan 20 19:04:16 compute-0 systemd[1]: libpod-dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362.scope: Deactivated successfully.
Jan 20 19:04:16 compute-0 podman[94778]: 2026-01-20 19:04:16.014283643 +0000 UTC m=+0.712063412 container died dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:16 compute-0 python3[94847]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-86c04d26d2553404f3d26572c98759f16c85ec62e948f33d20aeff5e0309be23-merged.mount: Deactivated successfully.
Jan 20 19:04:16 compute-0 podman[94778]: 2026-01-20 19:04:16.090286274 +0000 UTC m=+0.788066043 container remove dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=relaxed_gauss, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:16 compute-0 systemd[1]: libpod-conmon-dc6e6b9992ff7e08f50b5029a16f0c26330d6aca74d94770bdf1002ac1cc8362.scope: Deactivated successfully.
Jan 20 19:04:16 compute-0 podman[94860]: 2026-01-20 19:04:16.123446764 +0000 UTC m=+0.067902808 container create d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:16 compute-0 sudo[94101]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:16 compute-0 systemd[1]: Started libpod-conmon-d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff.scope.
Jan 20 19:04:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd32466a20e556bb07eafbea159f3d77fb0a2decd8fe4da9edc9cb2061c65ec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ddd32466a20e556bb07eafbea159f3d77fb0a2decd8fe4da9edc9cb2061c65ec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:16 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3006177493' entity='client.admin' cmd={"prefix": "mon dump", "format": "json"} : dispatch
Jan 20 19:04:16 compute-0 ceph-mon[75120]: pgmap v75: 8 pgs: 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:16 compute-0 ceph-mon[75120]: osdmap e33: 3 total, 3 up, 3 in
Jan 20 19:04:16 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} : dispatch
Jan 20 19:04:16 compute-0 sudo[94880]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:16 compute-0 sudo[94880]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:16 compute-0 sudo[94880]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:16 compute-0 podman[94860]: 2026-01-20 19:04:16.104130563 +0000 UTC m=+0.048586627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:16 compute-0 podman[94860]: 2026-01-20 19:04:16.198028631 +0000 UTC m=+0.142484695 container init d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:16 compute-0 podman[94860]: 2026-01-20 19:04:16.212978666 +0000 UTC m=+0.157434700 container start d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:16 compute-0 podman[94860]: 2026-01-20 19:04:16.216278326 +0000 UTC m=+0.160734370 container attach d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:16 compute-0 sudo[94909]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:04:16 compute-0 sudo[94909]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:16 compute-0 podman[94965]: 2026-01-20 19:04:16.561451347 +0000 UTC m=+0.051316373 container create c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:16 compute-0 systemd[1]: Started libpod-conmon-c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf.scope.
Jan 20 19:04:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:16 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 33 pg[9.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:16 compute-0 podman[94965]: 2026-01-20 19:04:16.62241119 +0000 UTC m=+0.112276236 container init c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:16 compute-0 podman[94965]: 2026-01-20 19:04:16.627642344 +0000 UTC m=+0.117507370 container start c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:16 compute-0 podman[94965]: 2026-01-20 19:04:16.630281767 +0000 UTC m=+0.120146803 container attach c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 19:04:16 compute-0 thirsty_sanderson[94981]: 167 167
Jan 20 19:04:16 compute-0 systemd[1]: libpod-c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf.scope: Deactivated successfully.
Jan 20 19:04:16 compute-0 podman[94965]: 2026-01-20 19:04:16.632460339 +0000 UTC m=+0.122325365 container died c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 19:04:16 compute-0 podman[94965]: 2026-01-20 19:04:16.545566539 +0000 UTC m=+0.035431585 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-559c91c3cf4130e0629ea69349f70f049ed3ed0b12ee9ee21b3ce41f4e54452a-merged.mount: Deactivated successfully.
Jan 20 19:04:16 compute-0 podman[94965]: 2026-01-20 19:04:16.668789364 +0000 UTC m=+0.158654390 container remove c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sanderson, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:16 compute-0 systemd[1]: libpod-conmon-c2f01c34ee8c6c9b2b16bf6260c3d9dd671b5264d4fab9d230397fa7962ac4bf.scope: Deactivated successfully.
Jan 20 19:04:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Jan 20 19:04:16 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1968822085' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 20 19:04:16 compute-0 youthful_fermat[94888]: [client.openstack]
Jan 20 19:04:16 compute-0 youthful_fermat[94888]:         key = AQD40G9pAAAAABAAnCl2JBwdjyAhlZdo4nlc0A==
Jan 20 19:04:16 compute-0 youthful_fermat[94888]:         caps mgr = "allow *"
Jan 20 19:04:16 compute-0 youthful_fermat[94888]:         caps mon = "profile rbd"
Jan 20 19:04:16 compute-0 youthful_fermat[94888]:         caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Jan 20 19:04:16 compute-0 systemd[1]: libpod-d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff.scope: Deactivated successfully.
Jan 20 19:04:16 compute-0 podman[94860]: 2026-01-20 19:04:16.735845722 +0000 UTC m=+0.680301806 container died d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:04:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddd32466a20e556bb07eafbea159f3d77fb0a2decd8fe4da9edc9cb2061c65ec-merged.mount: Deactivated successfully.
Jan 20 19:04:16 compute-0 podman[94860]: 2026-01-20 19:04:16.7848662 +0000 UTC m=+0.729322244 container remove d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff (image=quay.io/ceph/ceph:v20, name=youthful_fermat, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:16 compute-0 systemd[1]: libpod-conmon-d2367d2bec28ebba58bdfea36a0961f0b34d4d7295b67722ccbb7d3c088f10ff.scope: Deactivated successfully.
Jan 20 19:04:16 compute-0 sudo[94844]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:16 compute-0 podman[95019]: 2026-01-20 19:04:16.835432384 +0000 UTC m=+0.045141806 container create 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 19:04:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Jan 20 19:04:16 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 19:04:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Jan 20 19:04:16 compute-0 systemd[1]: Started libpod-conmon-7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e.scope.
Jan 20 19:04:16 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Jan 20 19:04:16 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 34 pg[9.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe080296db9f9d55a15b7d5dcbd3a556b1d1910bf88d76295f583830253e97a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe080296db9f9d55a15b7d5dcbd3a556b1d1910bf88d76295f583830253e97a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe080296db9f9d55a15b7d5dcbd3a556b1d1910bf88d76295f583830253e97a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fe080296db9f9d55a15b7d5dcbd3a556b1d1910bf88d76295f583830253e97a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:16 compute-0 podman[95019]: 2026-01-20 19:04:16.816194175 +0000 UTC m=+0.025903617 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:16 compute-0 podman[95019]: 2026-01-20 19:04:16.912113991 +0000 UTC m=+0.121823433 container init 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 19:04:16 compute-0 podman[95019]: 2026-01-20 19:04:16.919465936 +0000 UTC m=+0.129175358 container start 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 19:04:16 compute-0 podman[95019]: 2026-01-20 19:04:16.922730133 +0000 UTC m=+0.132439555 container attach 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:04:17 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1968822085' entity='client.admin' cmd={"prefix": "auth get", "entity": "client.openstack"} : dispatch
Jan 20 19:04:17 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Jan 20 19:04:17 compute-0 ceph-mon[75120]: osdmap e34: 3 total, 3 up, 3 in
Jan 20 19:04:17 compute-0 naughty_wiles[95035]: {
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:     "0": [
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:         {
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "devices": [
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "/dev/loop3"
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             ],
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_name": "ceph_lv0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_size": "21470642176",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "name": "ceph_lv0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "tags": {
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.crush_device_class": "",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.encrypted": "0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.osd_id": "0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.type": "block",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.vdo": "0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.with_tpm": "0"
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             },
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "type": "block",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "vg_name": "ceph_vg0"
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:         }
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:     ],
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:     "1": [
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:         {
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "devices": [
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "/dev/loop4"
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             ],
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_name": "ceph_lv1",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_size": "21470642176",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "name": "ceph_lv1",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "tags": {
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.crush_device_class": "",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.encrypted": "0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.osd_id": "1",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.type": "block",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.vdo": "0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.with_tpm": "0"
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             },
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "type": "block",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "vg_name": "ceph_vg1"
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:         }
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:     ],
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:     "2": [
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:         {
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "devices": [
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "/dev/loop5"
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             ],
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_name": "ceph_lv2",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_size": "21470642176",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "name": "ceph_lv2",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "tags": {
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.crush_device_class": "",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.encrypted": "0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.osd_id": "2",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.type": "block",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.vdo": "0",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:                 "ceph.with_tpm": "0"
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             },
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "type": "block",
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:             "vg_name": "ceph_vg2"
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:         }
Jan 20 19:04:17 compute-0 naughty_wiles[95035]:     ]
Jan 20 19:04:17 compute-0 naughty_wiles[95035]: }
Jan 20 19:04:17 compute-0 systemd[1]: libpod-7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e.scope: Deactivated successfully.
Jan 20 19:04:17 compute-0 podman[95019]: 2026-01-20 19:04:17.298947065 +0000 UTC m=+0.508656487 container died 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 19:04:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-4fe080296db9f9d55a15b7d5dcbd3a556b1d1910bf88d76295f583830253e97a-merged.mount: Deactivated successfully.
Jan 20 19:04:17 compute-0 podman[95019]: 2026-01-20 19:04:17.354418776 +0000 UTC m=+0.564128198 container remove 7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:17 compute-0 systemd[1]: libpod-conmon-7467225783bdf9ef9edafd517cccb85cbc8dd84cd63f3e277d369b96fdcd5e1e.scope: Deactivated successfully.
Jan 20 19:04:17 compute-0 sudo[94909]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:17 compute-0 sudo[95061]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:17 compute-0 sudo[95061]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:17 compute-0 sudo[95061]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v78: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:17 compute-0 sudo[95086]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:04:17 compute-0 sudo[95086]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:17 compute-0 podman[95146]: 2026-01-20 19:04:17.836049679 +0000 UTC m=+0.042712758 container create 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 19:04:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Jan 20 19:04:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Jan 20 19:04:17 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Jan 20 19:04:17 compute-0 systemd[1]: Started libpod-conmon-97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def.scope.
Jan 20 19:04:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Jan 20 19:04:17 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 20 19:04:17 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 35 pg[10.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:17 compute-0 podman[95146]: 2026-01-20 19:04:17.818401519 +0000 UTC m=+0.025064618 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:17 compute-0 podman[95146]: 2026-01-20 19:04:17.92299085 +0000 UTC m=+0.129653959 container init 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 19:04:17 compute-0 podman[95146]: 2026-01-20 19:04:17.929746671 +0000 UTC m=+0.136409750 container start 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:17 compute-0 podman[95146]: 2026-01-20 19:04:17.93307376 +0000 UTC m=+0.139736839 container attach 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:17 compute-0 bold_perlman[95191]: 167 167
Jan 20 19:04:17 compute-0 systemd[1]: libpod-97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def.scope: Deactivated successfully.
Jan 20 19:04:17 compute-0 podman[95146]: 2026-01-20 19:04:17.936663325 +0000 UTC m=+0.143326424 container died 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 19:04:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e083229f607171989f7913189abe35db9bdc8e2bc12f8e9f4b32ee5fc5be7393-merged.mount: Deactivated successfully.
Jan 20 19:04:17 compute-0 podman[95146]: 2026-01-20 19:04:17.975138292 +0000 UTC m=+0.181801371 container remove 97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 19:04:17 compute-0 systemd[1]: libpod-conmon-97af3c374199a3672174b2c659df716699e709c8e0c2a78d90170ce8fcdd2def.scope: Deactivated successfully.
Jan 20 19:04:18 compute-0 sudo[95316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cscjfzwxwogaztwagcklhvpmoqyjbnew ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768935857.7789674-36768-47467053585400/async_wrapper.py j778018551134 30 /home/zuul/.ansible/tmp/ansible-tmp-1768935857.7789674-36768-47467053585400/AnsiballZ_command.py _'
Jan 20 19:04:18 compute-0 sudo[95316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:18 compute-0 podman[95287]: 2026-01-20 19:04:18.173249621 +0000 UTC m=+0.070742826 container create 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:04:18 compute-0 ceph-mon[75120]: pgmap v78: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:18 compute-0 ceph-mon[75120]: osdmap e35: 3 total, 3 up, 3 in
Jan 20 19:04:18 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} : dispatch
Jan 20 19:04:18 compute-0 systemd[1]: Started libpod-conmon-21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093.scope.
Jan 20 19:04:18 compute-0 podman[95287]: 2026-01-20 19:04:18.139959919 +0000 UTC m=+0.037453194 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bee4aade414cba5034b7bae17b67e8833ccfccdd5802d20bb817f2694156f1b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bee4aade414cba5034b7bae17b67e8833ccfccdd5802d20bb817f2694156f1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bee4aade414cba5034b7bae17b67e8833ccfccdd5802d20bb817f2694156f1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5bee4aade414cba5034b7bae17b67e8833ccfccdd5802d20bb817f2694156f1b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:18 compute-0 ansible-async_wrapper.py[95323]: Invoked with j778018551134 30 /home/zuul/.ansible/tmp/ansible-tmp-1768935857.7789674-36768-47467053585400/AnsiballZ_command.py _
Jan 20 19:04:18 compute-0 podman[95287]: 2026-01-20 19:04:18.273199482 +0000 UTC m=+0.170692687 container init 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:18 compute-0 ansible-async_wrapper.py[95333]: Starting module and watcher
Jan 20 19:04:18 compute-0 ansible-async_wrapper.py[95333]: Start watching 95334 (30)
Jan 20 19:04:18 compute-0 ansible-async_wrapper.py[95334]: Start module (95334)
Jan 20 19:04:18 compute-0 podman[95287]: 2026-01-20 19:04:18.281315265 +0000 UTC m=+0.178808450 container start 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 19:04:18 compute-0 ansible-async_wrapper.py[95323]: Return async_wrapper task started.
Jan 20 19:04:18 compute-0 podman[95287]: 2026-01-20 19:04:18.28948653 +0000 UTC m=+0.186979745 container attach 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 19:04:18 compute-0 sudo[95316]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:18 compute-0 python3[95336]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:18 compute-0 podman[95337]: 2026-01-20 19:04:18.524475098 +0000 UTC m=+0.064888187 container create 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 19:04:18 compute-0 systemd[1]: Started libpod-conmon-80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64.scope.
Jan 20 19:04:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7833becc9b7e21b0c63ed8d61771434093034015654c31b10c8db0a56443adad/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7833becc9b7e21b0c63ed8d61771434093034015654c31b10c8db0a56443adad/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:18 compute-0 podman[95337]: 2026-01-20 19:04:18.593775109 +0000 UTC m=+0.134188228 container init 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:18 compute-0 podman[95337]: 2026-01-20 19:04:18.503266712 +0000 UTC m=+0.043679841 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:18 compute-0 podman[95337]: 2026-01-20 19:04:18.599180587 +0000 UTC m=+0.139593676 container start 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:04:18 compute-0 podman[95337]: 2026-01-20 19:04:18.603329175 +0000 UTC m=+0.143742284 container attach 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Jan 20 19:04:18 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 19:04:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Jan 20 19:04:18 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Jan 20 19:04:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 36 pg[10.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [2] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:19 compute-0 lvm[95451]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:04:19 compute-0 lvm[95451]: VG ceph_vg1 finished
Jan 20 19:04:19 compute-0 lvm[95448]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:04:19 compute-0 lvm[95448]: VG ceph_vg0 finished
Jan 20 19:04:19 compute-0 lvm[95453]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:04:19 compute-0 lvm[95453]: VG ceph_vg2 finished
Jan 20 19:04:19 compute-0 lvm[95454]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:04:19 compute-0 lvm[95454]: VG ceph_vg1 finished
Jan 20 19:04:19 compute-0 lvm[95455]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:04:19 compute-0 lvm[95455]: VG ceph_vg0 finished
Jan 20 19:04:19 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:04:19 compute-0 lvm[95456]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:04:19 compute-0 lvm[95456]: VG ceph_vg1 finished
Jan 20 19:04:19 compute-0 lucid_colden[95362]: 
Jan 20 19:04:19 compute-0 lucid_colden[95362]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 19:04:19 compute-0 systemd[1]: libpod-80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64.scope: Deactivated successfully.
Jan 20 19:04:19 compute-0 conmon[95362]: conmon 80d324d76f8d69626357 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64.scope/container/memory.events
Jan 20 19:04:19 compute-0 podman[95337]: 2026-01-20 19:04:19.146548325 +0000 UTC m=+0.686961434 container died 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030)
Jan 20 19:04:19 compute-0 lucid_sinoussi[95327]: {}
Jan 20 19:04:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-7833becc9b7e21b0c63ed8d61771434093034015654c31b10c8db0a56443adad-merged.mount: Deactivated successfully.
Jan 20 19:04:19 compute-0 podman[95337]: 2026-01-20 19:04:19.203974923 +0000 UTC m=+0.744388012 container remove 80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64 (image=quay.io/ceph/ceph:v20, name=lucid_colden, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 19:04:19 compute-0 systemd[1]: libpod-conmon-80d324d76f8d69626357c1198c43dc85e9e6bb8544ff6ba8c09c7e21c0878f64.scope: Deactivated successfully.
Jan 20 19:04:19 compute-0 systemd[1]: libpod-21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093.scope: Deactivated successfully.
Jan 20 19:04:19 compute-0 systemd[1]: libpod-21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093.scope: Consumed 1.445s CPU time.
Jan 20 19:04:19 compute-0 podman[95287]: 2026-01-20 19:04:19.214207258 +0000 UTC m=+1.111700443 container died 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:19 compute-0 ansible-async_wrapper.py[95334]: Module complete (95334)
Jan 20 19:04:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-5bee4aade414cba5034b7bae17b67e8833ccfccdd5802d20bb817f2694156f1b-merged.mount: Deactivated successfully.
Jan 20 19:04:19 compute-0 podman[95287]: 2026-01-20 19:04:19.250889821 +0000 UTC m=+1.148383006 container remove 21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_sinoussi, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:19 compute-0 systemd[1]: libpod-conmon-21854469a579db18fd5e8c3ea06759a7ed6dca902c9abc29583b076b64ddb093.scope: Deactivated successfully.
Jan 20 19:04:19 compute-0 sudo[95086]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:19 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:19 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:19 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev 3c87c65b-3318-4cc1-94df-8e8d07df483e (Updating mds.cephfs deployment (+1 -> 1))
Jan 20 19:04:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djcctc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Jan 20 19:04:19 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djcctc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 20 19:04:19 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djcctc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 19:04:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:04:19 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:19 compute-0 ceph-mgr[75417]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.djcctc on compute-0
Jan 20 19:04:19 compute-0 ceph-mgr[75417]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.djcctc on compute-0
Jan 20 19:04:19 compute-0 sudo[95485]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:19 compute-0 sudo[95485]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:19 compute-0 sudo[95485]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:19 compute-0 sudo[95510]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 _orch deploy --fsid 90fff835-31df-513f-a409-b6642f04e6ac
Jan 20 19:04:19 compute-0 sudo[95510]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:19 compute-0 ceph-mgr[75417]: [progress WARNING root] Starting Global Recovery Event,3 pgs not in active + clean state
Jan 20 19:04:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v81: 10 pgs: 1 unknown, 9 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.2 KiB/s wr, 4 op/s
Jan 20 19:04:19 compute-0 sudo[95594]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fucbpnidgrullrtbcmzhpebvskcycnjo ; /usr/bin/python3'
Jan 20 19:04:19 compute-0 sudo[95594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:19 compute-0 python3[95599]: ansible-ansible.legacy.async_status Invoked with jid=j778018551134.95323 mode=status _async_dir=/root/.ansible_async
Jan 20 19:04:19 compute-0 sudo[95594]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Jan 20 19:04:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Jan 20 19:04:19 compute-0 podman[95623]: 2026-01-20 19:04:19.925426729 +0000 UTC m=+0.091765647 container create d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:19 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Jan 20 19:04:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Jan 20 19:04:19 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 20 19:04:19 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Jan 20 19:04:19 compute-0 ceph-mon[75120]: osdmap e36: 3 total, 3 up, 3 in
Jan 20 19:04:19 compute-0 ceph-mon[75120]: from='client.14256 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:04:19 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:19 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:19 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djcctc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} : dispatch
Jan 20 19:04:19 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.djcctc", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Jan 20 19:04:19 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:19 compute-0 ceph-mon[75120]: Deploying daemon mds.cephfs.compute-0.djcctc on compute-0
Jan 20 19:04:19 compute-0 ceph-mon[75120]: pgmap v81: 10 pgs: 1 unknown, 9 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.2 KiB/s wr, 4 op/s
Jan 20 19:04:19 compute-0 podman[95623]: 2026-01-20 19:04:19.86498362 +0000 UTC m=+0.031322588 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:19 compute-0 systemd[1]: Started libpod-conmon-d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9.scope.
Jan 20 19:04:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:20 compute-0 sudo[95690]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tssobkdefznrcpqzskqxbhgddtyuuide ; /usr/bin/python3'
Jan 20 19:04:20 compute-0 sudo[95690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:20 compute-0 podman[95623]: 2026-01-20 19:04:20.055752924 +0000 UTC m=+0.222091892 container init d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:20 compute-0 podman[95623]: 2026-01-20 19:04:20.065743222 +0000 UTC m=+0.232082120 container start d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:20 compute-0 loving_pike[95664]: 167 167
Jan 20 19:04:20 compute-0 systemd[1]: libpod-d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9.scope: Deactivated successfully.
Jan 20 19:04:20 compute-0 podman[95623]: 2026-01-20 19:04:20.098882931 +0000 UTC m=+0.265221829 container attach d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 19:04:20 compute-0 podman[95623]: 2026-01-20 19:04:20.099785762 +0000 UTC m=+0.266124640 container died d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Jan 20 19:04:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-47a3e654e355d3ceb78f6bdc32eb8f86922d34d08236018346899dbffcf2ed48-merged.mount: Deactivated successfully.
Jan 20 19:04:20 compute-0 podman[95623]: 2026-01-20 19:04:20.167911885 +0000 UTC m=+0.334250763 container remove d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:20 compute-0 systemd[1]: libpod-conmon-d3f4d4b767d6bfb50ec1312926a60da26e1c5f6ec2862c9bfecec1b891673dd9.scope: Deactivated successfully.
Jan 20 19:04:20 compute-0 python3[95692]: ansible-ansible.legacy.async_status Invoked with jid=j778018551134.95323 mode=cleanup _async_dir=/root/.ansible_async
Jan 20 19:04:20 compute-0 sudo[95690]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:20 compute-0 systemd[1]: Reloading.
Jan 20 19:04:20 compute-0 systemd-sysv-generator[95741]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:04:20 compute-0 systemd-rc-local-generator[95737]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:04:20 compute-0 systemd[1]: Reloading.
Jan 20 19:04:20 compute-0 systemd-rc-local-generator[95779]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:04:20 compute-0 systemd-sysv-generator[95784]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:04:20 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 37 pg[11.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:20 compute-0 sudo[95807]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fdpbgmodppcfoqacvhszlfglmkgmaqob ; /usr/bin/python3'
Jan 20 19:04:20 compute-0 sudo[95807]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:20 compute-0 systemd[1]: Starting Ceph mds.cephfs.compute-0.djcctc for 90fff835-31df-513f-a409-b6642f04e6ac...
Jan 20 19:04:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Jan 20 19:04:20 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 19:04:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Jan 20 19:04:20 compute-0 python3[95811]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:20 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Jan 20 19:04:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Jan 20 19:04:20 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 20 19:04:20 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 38 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [1] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:20 compute-0 ceph-mon[75120]: osdmap e37: 3 total, 3 up, 3 in
Jan 20 19:04:20 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} : dispatch
Jan 20 19:04:20 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Jan 20 19:04:20 compute-0 ceph-mon[75120]: osdmap e38: 3 total, 3 up, 3 in
Jan 20 19:04:20 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} : dispatch
Jan 20 19:04:21 compute-0 podman[95849]: 2026-01-20 19:04:21.002103996 +0000 UTC m=+0.047349099 container create b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True)
Jan 20 19:04:21 compute-0 podman[95869]: 2026-01-20 19:04:21.032680405 +0000 UTC m=+0.053965017 container create 83d8b470dcb94ce86655b877af0d38a9040c6f7ce293453f291fdc0baa3bb4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mds-cephfs-compute-0-djcctc, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 19:04:21 compute-0 systemd[1]: Started libpod-conmon-b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2.scope.
Jan 20 19:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf28c6e0124c4886c4fbadcb9df0193d482b3f7182325662708d5bbf7a3416d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf28c6e0124c4886c4fbadcb9df0193d482b3f7182325662708d5bbf7a3416d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf28c6e0124c4886c4fbadcb9df0193d482b3f7182325662708d5bbf7a3416d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf28c6e0124c4886c4fbadcb9df0193d482b3f7182325662708d5bbf7a3416d1/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.djcctc supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:21 compute-0 podman[95869]: 2026-01-20 19:04:21.075700919 +0000 UTC m=+0.096985561 container init 83d8b470dcb94ce86655b877af0d38a9040c6f7ce293453f291fdc0baa3bb4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mds-cephfs-compute-0-djcctc, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 19:04:21 compute-0 podman[95849]: 2026-01-20 19:04:20.982055268 +0000 UTC m=+0.027300381 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557126ff124fb90265e6870582a66e43b82e92d20186925207f6e5e2139cd119/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:21 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/557126ff124fb90265e6870582a66e43b82e92d20186925207f6e5e2139cd119/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:21 compute-0 podman[95869]: 2026-01-20 19:04:21.084860058 +0000 UTC m=+0.106144680 container start 83d8b470dcb94ce86655b877af0d38a9040c6f7ce293453f291fdc0baa3bb4fc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mds-cephfs-compute-0-djcctc, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:21 compute-0 bash[95869]: 83d8b470dcb94ce86655b877af0d38a9040c6f7ce293453f291fdc0baa3bb4fc
Jan 20 19:04:21 compute-0 podman[95869]: 2026-01-20 19:04:21.0140457 +0000 UTC m=+0.035330342 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:21 compute-0 podman[95849]: 2026-01-20 19:04:21.09460004 +0000 UTC m=+0.139845153 container init b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:21 compute-0 systemd[1]: Started Ceph mds.cephfs.compute-0.djcctc for 90fff835-31df-513f-a409-b6642f04e6ac.
Jan 20 19:04:21 compute-0 podman[95849]: 2026-01-20 19:04:21.102556148 +0000 UTC m=+0.147801241 container start b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 19:04:21 compute-0 podman[95849]: 2026-01-20 19:04:21.105896749 +0000 UTC m=+0.151141842 container attach b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 19:04:21 compute-0 ceph-mds[95894]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 19:04:21 compute-0 ceph-mds[95894]: ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo), process ceph-mds, pid 2
Jan 20 19:04:21 compute-0 ceph-mds[95894]: main not setting numa affinity
Jan 20 19:04:21 compute-0 ceph-mds[95894]: pidfile_write: ignore empty --pid-file
Jan 20 19:04:21 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mds-cephfs-compute-0-djcctc[95889]: starting mds.cephfs.compute-0.djcctc at 
Jan 20 19:04:21 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc Updating MDS map to version 2 from mon.0
Jan 20 19:04:21 compute-0 sudo[95510]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 19:04:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:21 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev 3c87c65b-3318-4cc1-94df-8e8d07df483e (Updating mds.cephfs deployment (+1 -> 1))
Jan 20 19:04:21 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 3c87c65b-3318-4cc1-94df-8e8d07df483e (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Jan 20 19:04:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Jan 20 19:04:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Jan 20 19:04:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:21 compute-0 sudo[95915]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:04:21 compute-0 sudo[95915]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:21 compute-0 sudo[95915]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:21 compute-0 sudo[95958]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:21 compute-0 sudo[95958]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:21 compute-0 sudo[95958]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:21 compute-0 sudo[95983]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 20 19:04:21 compute-0 sudo[95983]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v84: 11 pgs: 1 unknown, 10 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.2 KiB/s wr, 4 op/s
Jan 20 19:04:21 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:04:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Jan 20 19:04:22 compute-0 infallible_cerf[95887]: 
Jan 20 19:04:22 compute-0 infallible_cerf[95887]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Jan 20 19:04:22 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 19:04:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Jan 20 19:04:22 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Jan 20 19:04:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e3 new map
Jan 20 19:04:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e3 print_map
                                           e3
                                           btime 2026-01-20T19:04:22:675421+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        2
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T19:04:08.498557+0000
                                           modified        2026-01-20T19:04:08.498557+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        
                                           up        {}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                            
                                            
                                           Standby daemons:
                                            
                                           [mds.cephfs.compute-0.djcctc{-1:14258} state up:standby seq 1 addr [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] compat {c=[1],r=[1],i=[1fff]}]
Jan 20 19:04:22 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:22 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:22 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:22 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:22 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:22 compute-0 ceph-mon[75120]: pgmap v84: 11 pgs: 1 unknown, 10 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 2.2 KiB/s wr, 4 op/s
Jan 20 19:04:22 compute-0 ceph-mon[75120]: from='client.14260 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc Updating MDS map to version 3 from mon.0
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc Monitors have assigned me to become a standby
Jan 20 19:04:22 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] up:boot
Jan 20 19:04:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] as mds.0
Jan 20 19:04:22 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.djcctc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 20 19:04:22 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 20 19:04:22 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 20 19:04:22 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 19:04:22 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Jan 20 19:04:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.djcctc"} v 0)
Jan 20 19:04:22 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.djcctc"} : dispatch
Jan 20 19:04:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e3 all = 0
Jan 20 19:04:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e4 new map
Jan 20 19:04:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e4 print_map
                                           e4
                                           btime 2026-01-20T19:04:22:696302+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        4
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T19:04:08.498557+0000
                                           modified        2026-01-20T19:04:22.696296+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14258}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 0 members: 
                                           [mds.cephfs.compute-0.djcctc{0:14258} state up:creating seq 1 addr [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 20 19:04:22 compute-0 systemd[1]: libpod-b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2.scope: Deactivated successfully.
Jan 20 19:04:22 compute-0 podman[95849]: 2026-01-20 19:04:22.702706945 +0000 UTC m=+1.747952038 container died b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc Updating MDS map to version 4 from mon.0
Jan 20 19:04:22 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.djcctc=up:creating}
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x1
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x100
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x600
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x601
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x602
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x603
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x604
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x605
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x606
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x607
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x608
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.cache creating system inode with ino:0x609
Jan 20 19:04:22 compute-0 ceph-mds[95894]: mds.0.4 creating_done
Jan 20 19:04:22 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.djcctc is now active in filesystem cephfs as rank 0
Jan 20 19:04:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-557126ff124fb90265e6870582a66e43b82e92d20186925207f6e5e2139cd119-merged.mount: Deactivated successfully.
Jan 20 19:04:22 compute-0 podman[95849]: 2026-01-20 19:04:22.770885669 +0000 UTC m=+1.816130762 container remove b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2 (image=quay.io/ceph/ceph:v20, name=infallible_cerf, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 19:04:22 compute-0 systemd[1]: libpod-conmon-b7fb0fab3bb9030e5103a5c9e6dc0f01767b6c8a25d7f72ae080aaba3101d1d2.scope: Deactivated successfully.
Jan 20 19:04:22 compute-0 sudo[95807]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:22 compute-0 podman[96093]: 2026-01-20 19:04:22.886484143 +0000 UTC m=+0.057291705 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 19:04:22 compute-0 radosgw[93659]: v1 topic migration: starting v1 topic migration..
Jan 20 19:04:22 compute-0 radosgw[93659]: v1 topic migration: finished v1 topic migration
Jan 20 19:04:22 compute-0 radosgw[93659]: framework: beast
Jan 20 19:04:22 compute-0 radosgw[93659]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Jan 20 19:04:22 compute-0 radosgw[93659]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Jan 20 19:04:22 compute-0 podman[96093]: 2026-01-20 19:04:22.989728642 +0000 UTC m=+0.160536214 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 20 19:04:22 compute-0 radosgw[93659]: starting handler: beast
Jan 20 19:04:23 compute-0 radosgw[93659]: set uid:gid to 167:167 (ceph:ceph)
Jan 20 19:04:23 compute-0 radosgw[93659]: mgrc service_daemon_register rgw.14250 metadata {arch=x86_64,ceph_release=tentacle,ceph_version=ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo),ceph_version_short=20.2.0,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.dbzrzk,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026,kernel_version=5.14.0-661.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864312,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=6427199d-52d7-4810-99bf-ec966a7007f4,zone_name=default,zonegroup_id=7f3fa8c0-913b-4a23-89e0-2cf7070dd47e,zonegroup_name=default}
Jan 20 19:04:23 compute-0 ansible-async_wrapper.py[95333]: Done in kid B.
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:23 compute-0 sudo[96253]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjihrxczvmfaxxsqgwbeysdynhsabqzc ; /usr/bin/python3'
Jan 20 19:04:23 compute-0 sudo[96253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v86: 11 pgs: 1 unknown, 10 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.9 KiB/s wr, 4 op/s
Jan 20 19:04:23 compute-0 python3[96265]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:23 compute-0 podman[96292]: 2026-01-20 19:04:23.687879593 +0000 UTC m=+0.047700918 container create 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430692269' entity='client.rgw.rgw.compute-0.dbzrzk' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Jan 20 19:04:23 compute-0 ceph-mon[75120]: osdmap e39: 3 total, 3 up, 3 in
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mds.? [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] up:boot
Jan 20 19:04:23 compute-0 ceph-mon[75120]: daemon mds.cephfs.compute-0.djcctc assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: Cluster is now healthy
Jan 20 19:04:23 compute-0 ceph-mon[75120]: fsmap cephfs:0 1 up:standby
Jan 20 19:04:23 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "mds metadata", "who": "cephfs.compute-0.djcctc"} : dispatch
Jan 20 19:04:23 compute-0 ceph-mon[75120]: fsmap cephfs:1 {0=cephfs.compute-0.djcctc=up:creating}
Jan 20 19:04:23 compute-0 ceph-mon[75120]: daemon mds.cephfs.compute-0.djcctc is now active in filesystem cephfs as rank 0
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e5 new map
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).mds e5 print_map
                                           e5
                                           btime 2026-01-20T19:04:23:700833+0000
                                           enable_multiple, ever_enabled_multiple: 1,1
                                           default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           legacy client fscid: 1
                                            
                                           Filesystem 'cephfs' (1)
                                           fs_name        cephfs
                                           epoch        5
                                           flags        12 joinable allow_snaps allow_multimds_snaps
                                           created        2026-01-20T19:04:08.498557+0000
                                           modified        2026-01-20T19:04:23.700829+0000
                                           tableserver        0
                                           root        0
                                           session_timeout        60
                                           session_autoclose        300
                                           max_file_size        1099511627776
                                           max_xattr_size        65536
                                           required_client_features        {}
                                           last_failure        0
                                           last_failure_osd_epoch        0
                                           compat        compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
                                           max_mds        1
                                           in        0
                                           up        {0=14258}
                                           failed        
                                           damaged        
                                           stopped        
                                           data_pools        [7]
                                           metadata_pool        6
                                           inline_data        disabled
                                           balancer        
                                           bal_rank_mask        -1
                                           standby_count_wanted        0
                                           qdb_cluster        leader: 14258 members: 14258
                                           [mds.cephfs.compute-0.djcctc{0:14258} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] compat {c=[1],r=[1],i=[1fff]}]
                                            
                                            
Jan 20 19:04:23 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc Updating MDS map to version 5 from mon.0
Jan 20 19:04:23 compute-0 ceph-mds[95894]: mds.0.4 handle_mds_map I am now mds.0.4
Jan 20 19:04:23 compute-0 ceph-mds[95894]: mds.0.4 handle_mds_map state change up:creating --> up:active
Jan 20 19:04:23 compute-0 ceph-mds[95894]: mds.0.4 recovery_done -- successful recovery!
Jan 20 19:04:23 compute-0 ceph-mds[95894]: mds.0.4 active_start
Jan 20 19:04:23 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] up:active
Jan 20 19:04:23 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.djcctc=up:active}
Jan 20 19:04:23 compute-0 systemd[1]: Started libpod-conmon-3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018.scope.
Jan 20 19:04:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:23 compute-0 podman[96292]: 2026-01-20 19:04:23.663503051 +0000 UTC m=+0.023324396 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4ca42337ef0a7361282851363ba241c0bdb1d9655654f8e13784af33459412/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad4ca42337ef0a7361282851363ba241c0bdb1d9655654f8e13784af33459412/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:23 compute-0 podman[96292]: 2026-01-20 19:04:23.779924626 +0000 UTC m=+0.139745951 container init 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:04:23 compute-0 podman[96292]: 2026-01-20 19:04:23.78600973 +0000 UTC m=+0.145831055 container start 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:23 compute-0 podman[96292]: 2026-01-20 19:04:23.789303178 +0000 UTC m=+0.149124503 container attach 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:23 compute-0 sudo[95983]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:04:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:04:23 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:23 compute-0 sudo[96344]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:23 compute-0 sudo[96344]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:23 compute-0 sudo[96344]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:23 compute-0 sudo[96388]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:04:23 compute-0 sudo[96388]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:24 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:04:24 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} v 0)
Jan 20 19:04:24 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 19:04:24 compute-0 goofy_hypatia[96329]: 
Jan 20 19:04:24 compute-0 goofy_hypatia[96329]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_exit_timeout_secs": 120, "rgw_frontend_port": 8082}}]
Jan 20 19:04:24 compute-0 systemd[1]: libpod-3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018.scope: Deactivated successfully.
Jan 20 19:04:24 compute-0 podman[96292]: 2026-01-20 19:04:24.232129357 +0000 UTC m=+0.591950672 container died 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:04:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad4ca42337ef0a7361282851363ba241c0bdb1d9655654f8e13784af33459412-merged.mount: Deactivated successfully.
Jan 20 19:04:24 compute-0 podman[96292]: 2026-01-20 19:04:24.273771938 +0000 UTC m=+0.633593263 container remove 3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018 (image=quay.io/ceph/ceph:v20, name=goofy_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:04:24 compute-0 systemd[1]: libpod-conmon-3f207fe819a84913f0c93b78c95a4ea91ca796233670a279f44eb87b12b66018.scope: Deactivated successfully.
Jan 20 19:04:24 compute-0 podman[96428]: 2026-01-20 19:04:24.292489365 +0000 UTC m=+0.050101415 container create cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:24 compute-0 sudo[96253]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:24 compute-0 systemd[1]: Started libpod-conmon-cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb.scope.
Jan 20 19:04:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:24 compute-0 podman[96428]: 2026-01-20 19:04:24.358771594 +0000 UTC m=+0.116383654 container init cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:24 compute-0 podman[96428]: 2026-01-20 19:04:24.365486334 +0000 UTC m=+0.123098384 container start cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:04:24 compute-0 trusting_kapitsa[96456]: 167 167
Jan 20 19:04:24 compute-0 podman[96428]: 2026-01-20 19:04:24.272597541 +0000 UTC m=+0.030209611 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:24 compute-0 systemd[1]: libpod-cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb.scope: Deactivated successfully.
Jan 20 19:04:24 compute-0 podman[96428]: 2026-01-20 19:04:24.371137798 +0000 UTC m=+0.128749898 container attach cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 19:04:24 compute-0 podman[96428]: 2026-01-20 19:04:24.372543592 +0000 UTC m=+0.130155672 container died cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:04:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-2a9bb0019e6831657ddca07dfc901de3ed652b5e8d4c673d6268a58a1acb2764-merged.mount: Deactivated successfully.
Jan 20 19:04:24 compute-0 podman[96428]: 2026-01-20 19:04:24.409601064 +0000 UTC m=+0.167213114 container remove cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_kapitsa, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:04:24 compute-0 systemd[1]: libpod-conmon-cd8bb78f2eb06d7292669195202c40267085c186ed629259372598e63c6ab3fb.scope: Deactivated successfully.
Jan 20 19:04:24 compute-0 ceph-mgr[75417]: [progress INFO root] Writing back 5 completed events
Jan 20 19:04:24 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 19:04:24 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:24 compute-0 podman[96480]: 2026-01-20 19:04:24.562299851 +0000 UTC m=+0.038893386 container create b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:04:24 compute-0 systemd[1]: Started libpod-conmon-b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18.scope.
Jan 20 19:04:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:24 compute-0 podman[96480]: 2026-01-20 19:04:24.54670317 +0000 UTC m=+0.023296725 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:24 compute-0 podman[96480]: 2026-01-20 19:04:24.647153403 +0000 UTC m=+0.123746958 container init b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 19:04:24 compute-0 podman[96480]: 2026-01-20 19:04:24.660765588 +0000 UTC m=+0.137359123 container start b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 19:04:24 compute-0 podman[96480]: 2026-01-20 19:04:24.664309351 +0000 UTC m=+0.140902886 container attach b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 19:04:24 compute-0 ceph-mon[75120]: pgmap v86: 11 pgs: 1 unknown, 10 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 1.9 KiB/s wr, 4 op/s
Jan 20 19:04:24 compute-0 ceph-mon[75120]: mds.? [v2:192.168.122.100:6814/78182875,v1:192.168.122.100:6815/78182875] up:active
Jan 20 19:04:24 compute-0 ceph-mon[75120]: fsmap cephfs:1 {0=cephfs.compute-0.djcctc=up:active}
Jan 20 19:04:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:04:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:04:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:04:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 19:04:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:25 compute-0 stoic_antonelli[96497]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:04:25 compute-0 stoic_antonelli[96497]: --> All data devices are unavailable
Jan 20 19:04:25 compute-0 sudo[96539]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgpfosvnnjpheolpxkmxrffnscqoxakw ; /usr/bin/python3'
Jan 20 19:04:25 compute-0 sudo[96539]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:25 compute-0 systemd[1]: libpod-b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18.scope: Deactivated successfully.
Jan 20 19:04:25 compute-0 podman[96480]: 2026-01-20 19:04:25.121894691 +0000 UTC m=+0.598488236 container died b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-0982d555c9d89ca20652dae46231262f78de8e50b0ab285601790aa2013b8183-merged.mount: Deactivated successfully.
Jan 20 19:04:25 compute-0 podman[96480]: 2026-01-20 19:04:25.161743581 +0000 UTC m=+0.638337116 container remove b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_antonelli, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:04:25 compute-0 systemd[1]: libpod-conmon-b9bf886b77e840449ab0fbb8f55bb5a7f8444caf2c42afa4f19c2d2b9dfb8a18.scope: Deactivated successfully.
Jan 20 19:04:25 compute-0 sudo[96388]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:25 compute-0 python3[96542]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:25 compute-0 sudo[96553]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:25 compute-0 sudo[96553]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:25 compute-0 sudo[96553]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:25 compute-0 podman[96571]: 2026-01-20 19:04:25.325743208 +0000 UTC m=+0.074226909 container create 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 20 19:04:25 compute-0 systemd[1]: Started libpod-conmon-134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06.scope.
Jan 20 19:04:25 compute-0 sudo[96589]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:04:25 compute-0 sudo[96589]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:25 compute-0 podman[96571]: 2026-01-20 19:04:25.276684049 +0000 UTC m=+0.025167810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6cf58f95daac689ba7f2c88dcdf2f95cff4d7be6de2be95298cba4000cfc63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6cf58f95daac689ba7f2c88dcdf2f95cff4d7be6de2be95298cba4000cfc63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:25 compute-0 podman[96571]: 2026-01-20 19:04:25.414503322 +0000 UTC m=+0.162987083 container init 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:25 compute-0 podman[96571]: 2026-01-20 19:04:25.426650071 +0000 UTC m=+0.175133782 container start 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:04:25 compute-0 podman[96571]: 2026-01-20 19:04:25.430588175 +0000 UTC m=+0.179071926 container attach 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:04:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v87: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 13 KiB/s wr, 243 op/s
Jan 20 19:04:25 compute-0 podman[96654]: 2026-01-20 19:04:25.703301741 +0000 UTC m=+0.058500394 container create d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 19:04:25 compute-0 ceph-mon[75120]: from='client.14262 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:04:25 compute-0 systemd[1]: Started libpod-conmon-d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390.scope.
Jan 20 19:04:25 compute-0 podman[96654]: 2026-01-20 19:04:25.677577738 +0000 UTC m=+0.032776471 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:25 compute-0 podman[96654]: 2026-01-20 19:04:25.799501102 +0000 UTC m=+0.154699785 container init d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:04:25 compute-0 podman[96654]: 2026-01-20 19:04:25.806509269 +0000 UTC m=+0.161707912 container start d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:25 compute-0 pedantic_hopper[96671]: 167 167
Jan 20 19:04:25 compute-0 podman[96654]: 2026-01-20 19:04:25.809983243 +0000 UTC m=+0.165181886 container attach d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:25 compute-0 systemd[1]: libpod-d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390.scope: Deactivated successfully.
Jan 20 19:04:25 compute-0 podman[96654]: 2026-01-20 19:04:25.811190391 +0000 UTC m=+0.166389034 container died d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 19:04:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-8f7ddb83a8a6af510785bb39b8e125cf2bea071318d8d4f6def881e58d8f4889-merged.mount: Deactivated successfully.
Jan 20 19:04:25 compute-0 podman[96654]: 2026-01-20 19:04:25.850314912 +0000 UTC m=+0.205513555 container remove d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:04:25 compute-0 systemd[1]: libpod-conmon-d3520420db696fa78763a7f47bcdf5d268d9474f315a9defe2f47c554a361390.scope: Deactivated successfully.
Jan 20 19:04:25 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:04:25 compute-0 goofy_ellis[96618]: 
Jan 20 19:04:25 compute-0 goofy_ellis[96618]: [{"container_id": "6869885aa1d5", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "0.20%", "created": "2026-01-20T19:03:00.062927Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2026-01-20T19:03:00.382853Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830291Z", "memory_usage": 7808745, "pending_daemon_config": false, "ports": [], "service_name": "crash", "started": "2026-01-20T19:02:59.224839Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@crash.compute-0", "version": "20.2.0"}, {"container_id": "83d8b470dcb9", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "3.76%", "created": "2026-01-20T19:04:21.098040Z", "daemon_id": "cephfs.compute-0.djcctc", "daemon_name": "mds.cephfs.compute-0.djcctc", "daemon_type": "mds", "events": ["2026-01-20T19:04:21.171052Z daemon:mds.cephfs.compute-0.djcctc [INFO] \"Deployed mds.cephfs.compute-0.djcctc on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": 
"2026-01-20T19:04:23.830818Z", "memory_usage": 16001269, "pending_daemon_config": false, "ports": [], "service_name": "mds.cephfs", "started": "2026-01-20T19:04:21.018164Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@mds.cephfs.compute-0.djcctc", "version": "20.2.0"}, {"container_id": "60642dffa907", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "16.08%", "created": "2026-01-20T19:02:09.582754Z", "daemon_id": "compute-0.meyjbf", "daemon_name": "mgr.compute-0.meyjbf", "daemon_type": "mgr", "events": ["2026-01-20T19:03:06.415164Z daemon:mgr.compute-0.meyjbf [INFO] \"Reconfigured mgr.compute-0.meyjbf on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830193Z", "memory_usage": 549139251, "pending_daemon_config": false, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-01-20T19:02:09.191123Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@mgr.compute-0.meyjbf", "version": "20.2.0"}, {"container_id": "b5c99f106188", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph:v20", "cpu_percentage": "2.66%", "created": "2026-01-20T19:02:04.845645Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2026-01-20T19:03:05.023681Z daemon:mon.compute-0 [INFO] 
\"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830061Z", "memory_request": 2147483648, "memory_usage": 42739957, "pending_daemon_config": false, "ports": [], "service_name": "mon", "started": "2026-01-20T19:02:07.125921Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@mon.compute-0", "version": "20.2.0"}, {"container_id": "eabc59bf78c2", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "1.84%", "created": "2026-01-20T19:03:31.006883Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2026-01-20T19:03:31.072531Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830415Z", "memory_request": 4294967296, "memory_usage": 60628664, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T19:03:30.915540Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@osd.0", "version": "20.2.0"}, {"container_id": "bfb3a392dadb", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", 
"cpu_percentage": "2.72%", "created": "2026-01-20T19:03:35.234981Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": ["2026-01-20T19:03:35.344145Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830512Z", "memory_request": 4294967296, "memory_usage": 59663974, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T19:03:35.071683Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@osd.1", "version": "20.2.0"}, {"container_id": "d045a60defb8", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], "container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "2.65%", "created": "2026-01-20T19:03:41.789132Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2026-01-20T19:03:41.897662Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830605Z", "memory_request": 4294967296, "memory_usage": 57860423, "pending_daemon_config": false, "ports": [], "service_name": "osd.default_drive_group", "started": "2026-01-20T19:03:41.681331Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@osd.2", "version": "20.2.0"}, {"container_id": "f7b32e8a4eac", "container_image_digests": ["quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "quay.io/ceph/ceph@sha256:4c65c801a8e5e5704934118b2c723e7233f2b5de8552bfc8f129dabe1fced0b1"], 
"container_image_id": "524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3", "container_image_name": "quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86", "cpu_percentage": "3.77%", "created": "2026-01-20T19:04:13.035854Z", "daemon_id": "rgw.compute-0.dbzrzk", "daemon_name": "rgw.rgw.compute-0.dbzrzk", "daemon_type": "rgw", "events": ["2026-01-20T19:04:13.117924Z daemon:rgw.rgw.compute-0.dbzrzk [INFO] \"Deployed rgw.rgw.compute-0.dbzrzk on host 'compute-0'\""], "hostname": "compute-0", "ip": "192.168.122.100", "is_active": false, "last_refresh": "2026-01-20T19:04:23.830698Z", "memory_usage": 100631838, "pending_daemon_config": true, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-01-20T19:04:12.892976Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-90fff835-31df-513f-a409-b6642f04e6ac@rgw.rgw.compute-0.dbzrzk", "version": "20.2.0"}]
Jan 20 19:04:25 compute-0 systemd[1]: libpod-134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06.scope: Deactivated successfully.
Jan 20 19:04:25 compute-0 podman[96571]: 2026-01-20 19:04:25.890750876 +0000 UTC m=+0.639234537 container died 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-dc6cf58f95daac689ba7f2c88dcdf2f95cff4d7be6de2be95298cba4000cfc63-merged.mount: Deactivated successfully.
Jan 20 19:04:26 compute-0 rsyslogd[1007]: message too long (8843) with configured size 8096, begin of message is: [{"container_id": "6869885aa1d5", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 20 19:04:26 compute-0 podman[96571]: 2026-01-20 19:04:26.046933617 +0000 UTC m=+0.795417298 container remove 134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06 (image=quay.io/ceph/ceph:v20, name=goofy_ellis, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 19:04:26 compute-0 sudo[96539]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:26 compute-0 podman[96708]: 2026-01-20 19:04:26.03368274 +0000 UTC m=+0.042347219 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:26 compute-0 podman[96708]: 2026-01-20 19:04:26.169765973 +0000 UTC m=+0.178430472 container create 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 19:04:26 compute-0 systemd[1]: libpod-conmon-134dda3fc3a040767669cf6df8ef3c6e86b85fa6f65622e69cef9a248b953c06.scope: Deactivated successfully.
Jan 20 19:04:26 compute-0 systemd[1]: Started libpod-conmon-10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8.scope.
Jan 20 19:04:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a05c222b32a33dac40d6adcc94a61a081325e98a7a321a7bf22065697f561/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a05c222b32a33dac40d6adcc94a61a081325e98a7a321a7bf22065697f561/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a05c222b32a33dac40d6adcc94a61a081325e98a7a321a7bf22065697f561/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5a05c222b32a33dac40d6adcc94a61a081325e98a7a321a7bf22065697f561/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:26 compute-0 podman[96708]: 2026-01-20 19:04:26.275541692 +0000 UTC m=+0.284206171 container init 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 19:04:26 compute-0 podman[96708]: 2026-01-20 19:04:26.281564725 +0000 UTC m=+0.290229184 container start 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 19:04:26 compute-0 podman[96708]: 2026-01-20 19:04:26.284470654 +0000 UTC m=+0.293135113 container attach 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]: {
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:     "0": [
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:         {
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "devices": [
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "/dev/loop3"
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             ],
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_name": "ceph_lv0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_size": "21470642176",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "name": "ceph_lv0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "tags": {
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.crush_device_class": "",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.encrypted": "0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.osd_id": "0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.type": "block",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.vdo": "0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.with_tpm": "0"
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             },
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "type": "block",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "vg_name": "ceph_vg0"
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:         }
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:     ],
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:     "1": [
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:         {
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "devices": [
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "/dev/loop4"
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             ],
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_name": "ceph_lv1",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_size": "21470642176",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "name": "ceph_lv1",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "tags": {
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.crush_device_class": "",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.encrypted": "0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.osd_id": "1",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.type": "block",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.vdo": "0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.with_tpm": "0"
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             },
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "type": "block",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "vg_name": "ceph_vg1"
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:         }
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:     ],
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:     "2": [
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:         {
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "devices": [
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "/dev/loop5"
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             ],
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_name": "ceph_lv2",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_size": "21470642176",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "name": "ceph_lv2",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "tags": {
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.crush_device_class": "",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.encrypted": "0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.osd_id": "2",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.type": "block",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.vdo": "0",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:                 "ceph.with_tpm": "0"
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             },
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "type": "block",
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:             "vg_name": "ceph_vg2"
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:         }
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]:     ]
Jan 20 19:04:26 compute-0 wizardly_snyder[96726]: }
Jan 20 19:04:26 compute-0 systemd[1]: libpod-10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8.scope: Deactivated successfully.
Jan 20 19:04:26 compute-0 podman[96708]: 2026-01-20 19:04:26.633299154 +0000 UTC m=+0.641963643 container died 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 19:04:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-be5a05c222b32a33dac40d6adcc94a61a081325e98a7a321a7bf22065697f561-merged.mount: Deactivated successfully.
Jan 20 19:04:26 compute-0 podman[96708]: 2026-01-20 19:04:26.733771927 +0000 UTC m=+0.742436386 container remove 10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_snyder, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle)
Jan 20 19:04:26 compute-0 ceph-mon[75120]: pgmap v87: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 90 KiB/s rd, 13 KiB/s wr, 243 op/s
Jan 20 19:04:26 compute-0 systemd[1]: libpod-conmon-10f3fe0892559ab44b91d406b1e91bd14d7740921f5129e9388216fa855116e8.scope: Deactivated successfully.
Jan 20 19:04:26 compute-0 sudo[96589]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:26 compute-0 sudo[96746]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:26 compute-0 sudo[96746]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:26 compute-0 sudo[96792]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpsdlnwkklztylhedzexdnawjostddhy ; /usr/bin/python3'
Jan 20 19:04:26 compute-0 sudo[96792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:26 compute-0 sudo[96746]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:26 compute-0 sudo[96797]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:04:26 compute-0 sudo[96797]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:27 compute-0 python3[96796]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:27 compute-0 podman[96822]: 2026-01-20 19:04:27.108971214 +0000 UTC m=+0.051557039 container create 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:27 compute-0 systemd[1]: Started libpod-conmon-0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7.scope.
Jan 20 19:04:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad475889a942c1ad5d235e7b14691fd5920e4b889713cc1230b6b4d34f8b4c2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dad475889a942c1ad5d235e7b14691fd5920e4b889713cc1230b6b4d34f8b4c2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:27 compute-0 podman[96822]: 2026-01-20 19:04:27.085342271 +0000 UTC m=+0.027928136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:27 compute-0 podman[96822]: 2026-01-20 19:04:27.187843464 +0000 UTC m=+0.130429299 container init 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:04:27 compute-0 podman[96822]: 2026-01-20 19:04:27.195439354 +0000 UTC m=+0.138025169 container start 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:27 compute-0 podman[96822]: 2026-01-20 19:04:27.200372182 +0000 UTC m=+0.142957997 container attach 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 19:04:27 compute-0 podman[96850]: 2026-01-20 19:04:27.222598511 +0000 UTC m=+0.037701798 container create 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:04:27 compute-0 systemd[1]: Started libpod-conmon-4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2.scope.
Jan 20 19:04:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:27 compute-0 podman[96850]: 2026-01-20 19:04:27.291201395 +0000 UTC m=+0.106304702 container init 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:04:27 compute-0 podman[96850]: 2026-01-20 19:04:27.298383747 +0000 UTC m=+0.113487034 container start 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:04:27 compute-0 adoring_nightingale[96868]: 167 167
Jan 20 19:04:27 compute-0 podman[96850]: 2026-01-20 19:04:27.301560473 +0000 UTC m=+0.116663760 container attach 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:04:27 compute-0 systemd[1]: libpod-4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2.scope: Deactivated successfully.
Jan 20 19:04:27 compute-0 podman[96850]: 2026-01-20 19:04:27.302630348 +0000 UTC m=+0.117733655 container died 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:04:27 compute-0 podman[96850]: 2026-01-20 19:04:27.205873844 +0000 UTC m=+0.020977161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-4c7d836dc5bdbc06d7a84fcf109bc5ae5ba22221c72e6c84c3fd3b4b0d57c8eb-merged.mount: Deactivated successfully.
Jan 20 19:04:27 compute-0 podman[96850]: 2026-01-20 19:04:27.342407945 +0000 UTC m=+0.157511232 container remove 4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_nightingale, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:27 compute-0 systemd[1]: libpod-conmon-4e658ce24bb5f2995bfd1881db6c1305b49ffd4f2af97ca573fba724e789e9b2.scope: Deactivated successfully.
Jan 20 19:04:27 compute-0 podman[96912]: 2026-01-20 19:04:27.509552216 +0000 UTC m=+0.053571656 container create dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 19:04:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v88: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 10 KiB/s wr, 192 op/s
Jan 20 19:04:27 compute-0 systemd[1]: Started libpod-conmon-dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4.scope.
Jan 20 19:04:27 compute-0 podman[96912]: 2026-01-20 19:04:27.483385314 +0000 UTC m=+0.027404764 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be001c42c9842634ddd1f9f5e1aa70533e981b774a54d14784bfaf688c3ab411/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be001c42c9842634ddd1f9f5e1aa70533e981b774a54d14784bfaf688c3ab411/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be001c42c9842634ddd1f9f5e1aa70533e981b774a54d14784bfaf688c3ab411/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be001c42c9842634ddd1f9f5e1aa70533e981b774a54d14784bfaf688c3ab411/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:27 compute-0 podman[96912]: 2026-01-20 19:04:27.610598794 +0000 UTC m=+0.154618254 container init dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:04:27 compute-0 podman[96912]: 2026-01-20 19:04:27.618063842 +0000 UTC m=+0.162083262 container start dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 20 19:04:27 compute-0 podman[96912]: 2026-01-20 19:04:27.632791262 +0000 UTC m=+0.176810692 container attach dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:04:27 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Jan 20 19:04:27 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2946416481' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 19:04:27 compute-0 reverent_diffie[96844]: 
Jan 20 19:04:27 compute-0 reverent_diffie[96844]: {"fsid":"90fff835-31df-513f-a409-b6642f04e6ac","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":140,"monmap":{"epoch":1,"min_mon_release_name":"tentacle","num_mons":1},"osdmap":{"epoch":39,"num_osds":3,"num_up_osds":3,"osd_up_since":1768935829,"num_in_osds":3,"osd_in_since":1768935800,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":11}],"num_pgs":11,"num_pools":11,"num_objects":249,"data_bytes":472000,"bytes_used":84451328,"bytes_avail":64327475200,"bytes_total":64411926528,"read_bytes_sec":92474,"write_bytes_sec":13308,"read_op_per_sec":151,"write_op_per_sec":92},"fsmap":{"epoch":5,"btime":"2026-01-20T19:04:23.700833+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.djcctc","status":"up:active","gid":14258}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs"],"services":{}},"servicemap":{"epoch":3,"modified":"2026-01-20T19:04:23.524411+0000","services":{"mds":{"daemons":{"summary":"","cephfs.compute-0.djcctc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14250":{"start_epoch":3,"start_stamp":"2026-01-20T19:04:23.021828+0000","gid":14250,"addr":"192.168.122.100:0/3430692269","metadata":{"arch":"x86_64","ceph_release":"tentacle","ceph_version":"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)","ceph_version_short":"20.2.0","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.dbzrzk","kernel_description":"#1 SMP PREEMPT_DYNAMIC Fri Jan 16 09:19:22 UTC 2026","kernel_version":"5.14.0-661.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864312","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"6427199d-52d7-4810-99bf-ec966a7007f4","zone_name":"default","zonegroup_id":"7f3fa8c0-913b-4a23-89e0-2cf7070dd47e","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"b6bedcb9-562c-42b4-be71-c432b8518626":{"message":"Global Recovery Event (5s)\n      [=========================...] ","progress":0.90909093618392944,"add_to_ceph_s":true}}}
Jan 20 19:04:27 compute-0 ceph-mds[95894]: mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 20 19:04:27 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mds-cephfs-compute-0-djcctc[95889]: 2026-01-20T19:04:27.717+0000 7f97ce1b8640 -1 mds.pinger is_rank_lagging: rank=0 was never sent ping request.
Jan 20 19:04:27 compute-0 systemd[1]: libpod-0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7.scope: Deactivated successfully.
Jan 20 19:04:27 compute-0 podman[96822]: 2026-01-20 19:04:27.727056228 +0000 UTC m=+0.669642043 container died 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 19:04:27 compute-0 ceph-mon[75120]: from='client.14264 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Jan 20 19:04:27 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2946416481' entity='client.admin' cmd={"prefix": "status", "format": "json"} : dispatch
Jan 20 19:04:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-dad475889a942c1ad5d235e7b14691fd5920e4b889713cc1230b6b4d34f8b4c2-merged.mount: Deactivated successfully.
Jan 20 19:04:27 compute-0 podman[96822]: 2026-01-20 19:04:27.769907728 +0000 UTC m=+0.712493543 container remove 0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7 (image=quay.io/ceph/ceph:v20, name=reverent_diffie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:27 compute-0 systemd[1]: libpod-conmon-0812509adc65169980f08dd9d83a0fd597e3b63162b953e3690d76b78a9b70b7.scope: Deactivated successfully.
Jan 20 19:04:27 compute-0 sudo[96792]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:28 compute-0 lvm[97022]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:04:28 compute-0 lvm[97023]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:04:28 compute-0 lvm[97023]: VG ceph_vg1 finished
Jan 20 19:04:28 compute-0 lvm[97022]: VG ceph_vg0 finished
Jan 20 19:04:28 compute-0 lvm[97025]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:04:28 compute-0 lvm[97025]: VG ceph_vg2 finished
Jan 20 19:04:28 compute-0 lvm[97026]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:04:28 compute-0 lvm[97026]: VG ceph_vg0 finished
Jan 20 19:04:28 compute-0 inspiring_moser[96929]: {}
Jan 20 19:04:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:28 compute-0 systemd[1]: libpod-dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4.scope: Deactivated successfully.
Jan 20 19:04:28 compute-0 systemd[1]: libpod-dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4.scope: Consumed 1.343s CPU time.
Jan 20 19:04:28 compute-0 podman[96912]: 2026-01-20 19:04:28.478868617 +0000 UTC m=+1.022888047 container died dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:28 compute-0 sudo[97064]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvltfbtwlydoxngylvolanunttsqmhgr ; /usr/bin/python3'
Jan 20 19:04:28 compute-0 sudo[97064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-be001c42c9842634ddd1f9f5e1aa70533e981b774a54d14784bfaf688c3ab411-merged.mount: Deactivated successfully.
Jan 20 19:04:28 compute-0 podman[96912]: 2026-01-20 19:04:28.681489623 +0000 UTC m=+1.225509023 container remove dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:28 compute-0 systemd[1]: libpod-conmon-dc06c8d9d04d26a18b310c990bf540aa8119a2099452251a91187dae1aafcfc4.scope: Deactivated successfully.
Jan 20 19:04:28 compute-0 sudo[96797]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:28 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:28 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:28 compute-0 python3[97066]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:28 compute-0 ceph-mon[75120]: pgmap v88: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 10 KiB/s wr, 192 op/s
Jan 20 19:04:28 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:28 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:28 compute-0 sudo[97068]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:04:28 compute-0 sudo[97068]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:28 compute-0 sudo[97068]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:28 compute-0 podman[97070]: 2026-01-20 19:04:28.808029997 +0000 UTC m=+0.039010280 container create b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:04:28 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:28 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:28 compute-0 systemd[1]: Started libpod-conmon-b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b.scope.
Jan 20 19:04:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52c10d548a4c49373142a83379de11baa479ac684c629d9cb08976dee370c91/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52c10d548a4c49373142a83379de11baa479ac684c629d9cb08976dee370c91/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:28 compute-0 sudo[97108]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:28 compute-0 sudo[97108]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:28 compute-0 sudo[97108]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:28 compute-0 podman[97070]: 2026-01-20 19:04:28.790740735 +0000 UTC m=+0.021721038 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:28 compute-0 podman[97070]: 2026-01-20 19:04:28.887865679 +0000 UTC m=+0.118845962 container init b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 19:04:28 compute-0 podman[97070]: 2026-01-20 19:04:28.898970624 +0000 UTC m=+0.129950907 container start b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:28 compute-0 podman[97070]: 2026-01-20 19:04:28.904641809 +0000 UTC m=+0.135622092 container attach b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:28 compute-0 sudo[97136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 20 19:04:28 compute-0 sudo[97136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Jan 20 19:04:29 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/479330853' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:04:29 compute-0 xenodochial_shannon[97113]: 
Jan 20 19:04:29 compute-0 xenodochial_shannon[97113]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.dbzrzk","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Jan 20 19:04:29 compute-0 systemd[1]: libpod-b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b.scope: Deactivated successfully.
Jan 20 19:04:29 compute-0 podman[97070]: 2026-01-20 19:04:29.344849052 +0000 UTC m=+0.575829335 container died b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:04:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-c52c10d548a4c49373142a83379de11baa479ac684c629d9cb08976dee370c91-merged.mount: Deactivated successfully.
Jan 20 19:04:29 compute-0 podman[97227]: 2026-01-20 19:04:29.372508978 +0000 UTC m=+0.068766171 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 19:04:29 compute-0 podman[97070]: 2026-01-20 19:04:29.383769335 +0000 UTC m=+0.614749618 container remove b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b (image=quay.io/ceph/ceph:v20, name=xenodochial_shannon, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:04:29 compute-0 systemd[1]: libpod-conmon-b99225ff191eb77141e3f5b8b28d74e8da7f928e4d9a4f308008f8a33df1255b.scope: Deactivated successfully.
Jan 20 19:04:29 compute-0 sudo[97064]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:29 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event b6bedcb9-562c-42b4-be71-c432b8518626 (Global Recovery Event) in 10 seconds
Jan 20 19:04:29 compute-0 podman[97227]: 2026-01-20 19:04:29.510872718 +0000 UTC m=+0.207129871 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 9.1 KiB/s wr, 170 op/s
Jan 20 19:04:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:29 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/479330853' entity='client.admin' cmd={"prefix": "config dump", "format": "json"} : dispatch
Jan 20 19:04:29 compute-0 ceph-mon[75120]: pgmap v89: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 63 KiB/s rd, 9.1 KiB/s wr, 170 op/s
Jan 20 19:04:30 compute-0 sudo[97453]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlqxddegpvxdoxqugehyvsvhlmfakpvc ; /usr/bin/python3'
Jan 20 19:04:30 compute-0 sudo[97453]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:30 compute-0 sudo[97136]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:30 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:30 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:04:30 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:04:30 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:04:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:04:30 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:04:30 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:04:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:04:30 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:04:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:04:30 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:30 compute-0 sudo[97456]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:30 compute-0 sudo[97456]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:30 compute-0 sudo[97456]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:30 compute-0 python3[97455]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:30 compute-0 sudo[97481]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:04:30 compute-0 sudo[97481]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:30 compute-0 podman[97501]: 2026-01-20 19:04:30.493631566 +0000 UTC m=+0.040104442 container create 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:30 compute-0 systemd[1]: Started libpod-conmon-90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86.scope.
Jan 20 19:04:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14286c1501ac3b48a08609fcf6d0e98cbeff4792cab107c9900ec16eb5829d63/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14286c1501ac3b48a08609fcf6d0e98cbeff4792cab107c9900ec16eb5829d63/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:30 compute-0 podman[97501]: 2026-01-20 19:04:30.47611863 +0000 UTC m=+0.022591506 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:30 compute-0 podman[97501]: 2026-01-20 19:04:30.577699979 +0000 UTC m=+0.124172965 container init 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:04:30 compute-0 podman[97501]: 2026-01-20 19:04:30.583789433 +0000 UTC m=+0.130262309 container start 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:30 compute-0 podman[97501]: 2026-01-20 19:04:30.587170153 +0000 UTC m=+0.133643129 container attach 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:04:30 compute-0 podman[97546]: 2026-01-20 19:04:30.732782415 +0000 UTC m=+0.040342897 container create b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:30 compute-0 systemd[1]: Started libpod-conmon-b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91.scope.
Jan 20 19:04:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:30 compute-0 podman[97546]: 2026-01-20 19:04:30.802466827 +0000 UTC m=+0.110027309 container init b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:30 compute-0 podman[97546]: 2026-01-20 19:04:30.807095597 +0000 UTC m=+0.114656079 container start b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:30 compute-0 podman[97546]: 2026-01-20 19:04:30.712218968 +0000 UTC m=+0.019779460 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:30 compute-0 podman[97546]: 2026-01-20 19:04:30.810507398 +0000 UTC m=+0.118067880 container attach b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:04:30 compute-0 eloquent_jemison[97572]: 167 167
Jan 20 19:04:30 compute-0 systemd[1]: libpod-b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91.scope: Deactivated successfully.
Jan 20 19:04:30 compute-0 podman[97546]: 2026-01-20 19:04:30.812379792 +0000 UTC m=+0.119940274 container died b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:04:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-0e4724f028626e07355c8f70411b39be66b5f930023c34356c61d4edd8f85787-merged.mount: Deactivated successfully.
Jan 20 19:04:30 compute-0 podman[97546]: 2026-01-20 19:04:30.843774227 +0000 UTC m=+0.151334709 container remove b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eloquent_jemison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 19:04:30 compute-0 systemd[1]: libpod-conmon-b5e58a0a3bc0efa77d4a32ee1ad43957433c4ded14e175fbb9491967bb919b91.scope: Deactivated successfully.
Jan 20 19:04:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Jan 20 19:04:30 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/136249076' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 20 19:04:30 compute-0 happy_jones[97521]: mimic
Jan 20 19:04:30 compute-0 podman[97596]: 2026-01-20 19:04:30.987299388 +0000 UTC m=+0.039655500 container create 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 20 19:04:30 compute-0 systemd[1]: libpod-90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86.scope: Deactivated successfully.
Jan 20 19:04:31 compute-0 systemd[1]: Started libpod-conmon-5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff.scope.
Jan 20 19:04:31 compute-0 podman[97612]: 2026-01-20 19:04:31.04683469 +0000 UTC m=+0.037724165 container died 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:31 compute-0 podman[97596]: 2026-01-20 19:04:30.967256674 +0000 UTC m=+0.019612796 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-14286c1501ac3b48a08609fcf6d0e98cbeff4792cab107c9900ec16eb5829d63-merged.mount: Deactivated successfully.
Jan 20 19:04:31 compute-0 podman[97596]: 2026-01-20 19:04:31.079711859 +0000 UTC m=+0.132067971 container init 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Jan 20 19:04:31 compute-0 podman[97596]: 2026-01-20 19:04:31.085454915 +0000 UTC m=+0.137811027 container start 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:31 compute-0 podman[97596]: 2026-01-20 19:04:31.089199124 +0000 UTC m=+0.141555236 container attach 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:31 compute-0 podman[97612]: 2026-01-20 19:04:31.093417354 +0000 UTC m=+0.084306799 container remove 90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86 (image=quay.io/ceph/ceph:v20, name=happy_jones, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:31 compute-0 systemd[1]: libpod-conmon-90d48cc835946cec2d0baa2b34496cc9b33941226d2a1a9091cb5acb9194df86.scope: Deactivated successfully.
Jan 20 19:04:31 compute-0 sudo[97453]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:04:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:04:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:04:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:04:31 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/136249076' entity='client.admin' cmd={"prefix": "osd get-require-min-compat-client"} : dispatch
Jan 20 19:04:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:04:31
Jan 20 19:04:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:04:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:04:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'vms', 'volumes', 'cephfs.cephfs.data', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control']
Jan 20 19:04:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:04:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v90: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 7.8 KiB/s wr, 146 op/s
Jan 20 19:04:31 compute-0 wizardly_galois[97625]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:04:31 compute-0 wizardly_galois[97625]: --> All data devices are unavailable
Jan 20 19:04:31 compute-0 systemd[1]: libpod-5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff.scope: Deactivated successfully.
Jan 20 19:04:31 compute-0 podman[97596]: 2026-01-20 19:04:31.613560686 +0000 UTC m=+0.665916838 container died 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:04:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ecb28cda8a6ea004a2209483e6f57e440d67b414a8cdcad9e6cdde5de1fc905-merged.mount: Deactivated successfully.
Jan 20 19:04:31 compute-0 podman[97596]: 2026-01-20 19:04:31.67109883 +0000 UTC m=+0.723454982 container remove 5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=wizardly_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 19:04:31 compute-0 systemd[1]: libpod-conmon-5d84898b60756c57d880b459ed20a1c9fca37465e3a83fd0f994cd07045b17ff.scope: Deactivated successfully.
Jan 20 19:04:31 compute-0 sudo[97481]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:31 compute-0 sudo[97661]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:31 compute-0 sudo[97661]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:31 compute-0 sudo[97661]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:31 compute-0 sudo[97686]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:04:31 compute-0 sudo[97686]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:32 compute-0 sudo[97734]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdmesfwdygrdqrapasdhmrspxcreonww ; /usr/bin/python3'
Jan 20 19:04:32 compute-0 sudo[97734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:04:32 compute-0 python3[97736]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:04:32 compute-0 podman[97750]: 2026-01-20 19:04:32.247193866 +0000 UTC m=+0.053638682 container create 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:04:32 compute-0 podman[97751]: 2026-01-20 19:04:32.282955405 +0000 UTC m=+0.068884465 container create 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:32 compute-0 systemd[1]: Started libpod-conmon-5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc.scope.
Jan 20 19:04:32 compute-0 systemd[1]: Started libpod-conmon-7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211.scope.
Jan 20 19:04:32 compute-0 podman[97750]: 2026-01-20 19:04:32.215088215 +0000 UTC m=+0.021533051 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:04:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:32 compute-0 ceph-mon[75120]: pgmap v90: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 54 KiB/s rd, 7.8 KiB/s wr, 146 op/s
Jan 20 19:04:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c9950af01ec4add59d76be3cb3d92c73be1024da4d64510274e271f00d7fed/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56c9950af01ec4add59d76be3cb3d92c73be1024da4d64510274e271f00d7fed/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:32 compute-0 podman[97750]: 2026-01-20 19:04:32.349115263 +0000 UTC m=+0.155560099 container init 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:04:32 compute-0 podman[97751]: 2026-01-20 19:04:32.261162128 +0000 UTC m=+0.047091228 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:32 compute-0 podman[97751]: 2026-01-20 19:04:32.353796523 +0000 UTC m=+0.139725603 container init 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:04:32 compute-0 podman[97750]: 2026-01-20 19:04:32.3574251 +0000 UTC m=+0.163869916 container start 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:32 compute-0 podman[97751]: 2026-01-20 19:04:32.358985347 +0000 UTC m=+0.144914407 container start 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:04:32 compute-0 podman[97750]: 2026-01-20 19:04:32.360566524 +0000 UTC m=+0.167011350 container attach 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:32 compute-0 systemd[1]: libpod-7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211.scope: Deactivated successfully.
Jan 20 19:04:32 compute-0 magical_lalande[97783]: 167 167
Jan 20 19:04:32 compute-0 conmon[97783]: conmon 7b08734b1003880d5d5b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211.scope/container/memory.events
Jan 20 19:04:32 compute-0 podman[97751]: 2026-01-20 19:04:32.365182373 +0000 UTC m=+0.151111433 container attach 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:32 compute-0 podman[97751]: 2026-01-20 19:04:32.365500302 +0000 UTC m=+0.151429362 container died 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-a498b444b816cd71ff8142c1c6fc3132cd678eee18db97ed11aff1951bdcfa6f-merged.mount: Deactivated successfully.
Jan 20 19:04:32 compute-0 podman[97751]: 2026-01-20 19:04:32.402104119 +0000 UTC m=+0.188033179 container remove 7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_lalande, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 19:04:32 compute-0 systemd[1]: libpod-conmon-7b08734b1003880d5d5b402f8b5d7e183a222ddc251af0429dc83b2dd48ef211.scope: Deactivated successfully.
Jan 20 19:04:32 compute-0 podman[97828]: 2026-01-20 19:04:32.558272311 +0000 UTC m=+0.048068320 container create 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:32 compute-0 systemd[1]: Started libpod-conmon-82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656.scope.
Jan 20 19:04:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de4725e49d955d5a0e3240c77671a9d2dda1370a75fc15e756a2ee961e2a457/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de4725e49d955d5a0e3240c77671a9d2dda1370a75fc15e756a2ee961e2a457/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de4725e49d955d5a0e3240c77671a9d2dda1370a75fc15e756a2ee961e2a457/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7de4725e49d955d5a0e3240c77671a9d2dda1370a75fc15e756a2ee961e2a457/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:32 compute-0 podman[97828]: 2026-01-20 19:04:32.536870144 +0000 UTC m=+0.026666243 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:32 compute-0 podman[97828]: 2026-01-20 19:04:32.639076697 +0000 UTC m=+0.128872726 container init 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:04:32 compute-0 podman[97828]: 2026-01-20 19:04:32.647917926 +0000 UTC m=+0.137713935 container start 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:32 compute-0 podman[97828]: 2026-01-20 19:04:32.651116173 +0000 UTC m=+0.140912182 container attach 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:32 compute-0 competent_bell[97845]: {
Jan 20 19:04:32 compute-0 competent_bell[97845]:     "0": [
Jan 20 19:04:32 compute-0 competent_bell[97845]:         {
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "devices": [
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "/dev/loop3"
Jan 20 19:04:32 compute-0 competent_bell[97845]:             ],
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_name": "ceph_lv0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_size": "21470642176",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "name": "ceph_lv0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "tags": {
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.crush_device_class": "",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.encrypted": "0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.osd_id": "0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.type": "block",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.vdo": "0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.with_tpm": "0"
Jan 20 19:04:32 compute-0 competent_bell[97845]:             },
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "type": "block",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "vg_name": "ceph_vg0"
Jan 20 19:04:32 compute-0 competent_bell[97845]:         }
Jan 20 19:04:32 compute-0 competent_bell[97845]:     ],
Jan 20 19:04:32 compute-0 competent_bell[97845]:     "1": [
Jan 20 19:04:32 compute-0 competent_bell[97845]:         {
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "devices": [
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "/dev/loop4"
Jan 20 19:04:32 compute-0 competent_bell[97845]:             ],
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_name": "ceph_lv1",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_size": "21470642176",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "name": "ceph_lv1",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "tags": {
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.crush_device_class": "",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.encrypted": "0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.osd_id": "1",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.type": "block",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.vdo": "0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.with_tpm": "0"
Jan 20 19:04:32 compute-0 competent_bell[97845]:             },
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "type": "block",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "vg_name": "ceph_vg1"
Jan 20 19:04:32 compute-0 competent_bell[97845]:         }
Jan 20 19:04:32 compute-0 competent_bell[97845]:     ],
Jan 20 19:04:32 compute-0 competent_bell[97845]:     "2": [
Jan 20 19:04:32 compute-0 competent_bell[97845]:         {
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "devices": [
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "/dev/loop5"
Jan 20 19:04:32 compute-0 competent_bell[97845]:             ],
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_name": "ceph_lv2",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_size": "21470642176",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "name": "ceph_lv2",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "tags": {
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.cluster_name": "ceph",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.crush_device_class": "",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.encrypted": "0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.objectstore": "bluestore",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.osd_id": "2",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.type": "block",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.vdo": "0",
Jan 20 19:04:32 compute-0 competent_bell[97845]:                 "ceph.with_tpm": "0"
Jan 20 19:04:32 compute-0 competent_bell[97845]:             },
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "type": "block",
Jan 20 19:04:32 compute-0 competent_bell[97845]:             "vg_name": "ceph_vg2"
Jan 20 19:04:32 compute-0 competent_bell[97845]:         }
Jan 20 19:04:32 compute-0 competent_bell[97845]:     ]
Jan 20 19:04:32 compute-0 competent_bell[97845]: }
Jan 20 19:04:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Jan 20 19:04:32 compute-0 strange_meitner[97781]: 
Jan 20 19:04:32 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4206978851' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 20 19:04:32 compute-0 systemd[1]: libpod-82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656.scope: Deactivated successfully.
Jan 20 19:04:32 compute-0 podman[97828]: 2026-01-20 19:04:32.961181523 +0000 UTC m=+0.450977532 container died 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:32 compute-0 systemd[1]: libpod-5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc.scope: Deactivated successfully.
Jan 20 19:04:32 compute-0 strange_meitner[97781]: {"mon":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"mgr":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"osd":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":3},"mds":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"rgw":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":1},"overall":{"ceph version 20.2.0 (69f84cc2651aa259a15bc192ddaabd3baba07489) tentacle (stable - RelWithDebInfo)":7}}
Jan 20 19:04:32 compute-0 podman[97750]: 2026-01-20 19:04:32.971716463 +0000 UTC m=+0.778161299 container died 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:04:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7de4725e49d955d5a0e3240c77671a9d2dda1370a75fc15e756a2ee961e2a457-merged.mount: Deactivated successfully.
Jan 20 19:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-56c9950af01ec4add59d76be3cb3d92c73be1024da4d64510274e271f00d7fed-merged.mount: Deactivated successfully.
Jan 20 19:04:33 compute-0 podman[97750]: 2026-01-20 19:04:33.026205224 +0000 UTC m=+0.832650040 container remove 5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc (image=quay.io/ceph/ceph:v20, name=strange_meitner, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:33 compute-0 systemd[1]: libpod-conmon-5686f652b3ad7b4724c14f8aeaef8c5591cba0f628a0d324fe46c786a0eb23fc.scope: Deactivated successfully.
Jan 20 19:04:33 compute-0 podman[97828]: 2026-01-20 19:04:33.047781956 +0000 UTC m=+0.537577965 container remove 82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:04:33 compute-0 sudo[97734]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:33 compute-0 systemd[1]: libpod-conmon-82a9e75cc612c0b81a27589ab142857c15a5670d3fab1d731ef43f9faa01b656.scope: Deactivated successfully.
Jan 20 19:04:33 compute-0 sudo[97686]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:33 compute-0 sudo[97877]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:04:33 compute-0 sudo[97877]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:33 compute-0 sudo[97877]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:33 compute-0 sudo[97902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:04:33 compute-0 sudo[97902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:33 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/4206978851' entity='client.admin' cmd={"prefix": "versions", "format": "json"} : dispatch
Jan 20 19:04:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e39 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 7.2 KiB/s wr, 134 op/s
Jan 20 19:04:33 compute-0 podman[97940]: 2026-01-20 19:04:33.591981707 +0000 UTC m=+0.058742963 container create 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 19:04:33 compute-0 systemd[1]: Started libpod-conmon-613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7.scope.
Jan 20 19:04:33 compute-0 podman[97940]: 2026-01-20 19:04:33.568314676 +0000 UTC m=+0.035075932 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:33 compute-0 podman[97940]: 2026-01-20 19:04:33.683268431 +0000 UTC m=+0.150029707 container init 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:04:33 compute-0 podman[97940]: 2026-01-20 19:04:33.691590658 +0000 UTC m=+0.158351884 container start 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 19:04:33 compute-0 podman[97940]: 2026-01-20 19:04:33.695668105 +0000 UTC m=+0.162429511 container attach 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:33 compute-0 hopeful_booth[97957]: 167 167
Jan 20 19:04:33 compute-0 systemd[1]: libpod-613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7.scope: Deactivated successfully.
Jan 20 19:04:33 compute-0 podman[97940]: 2026-01-20 19:04:33.698710387 +0000 UTC m=+0.165471713 container died 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:04:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-a727be090044c58a609acb2410840f687c2670e1ed0110828291eb655ae46a73-merged.mount: Deactivated successfully.
Jan 20 19:04:33 compute-0 podman[97940]: 2026-01-20 19:04:33.740976839 +0000 UTC m=+0.207738085 container remove 613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hopeful_booth, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:04:33 compute-0 systemd[1]: libpod-conmon-613279ab17774fad2bab740f9f4ba722b04986c8053a62883d91c9cd587b1ca7.scope: Deactivated successfully.
Jan 20 19:04:33 compute-0 podman[97981]: 2026-01-20 19:04:33.933643707 +0000 UTC m=+0.051542413 container create 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:33 compute-0 systemd[1]: Started libpod-conmon-992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c.scope.
Jan 20 19:04:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0240a2fc6786c1081cb6d7e7fd5966321c6d53096f79e03e31162d58bcc38bf9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0240a2fc6786c1081cb6d7e7fd5966321c6d53096f79e03e31162d58bcc38bf9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:34 compute-0 podman[97981]: 2026-01-20 19:04:33.910525218 +0000 UTC m=+0.028423954 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0240a2fc6786c1081cb6d7e7fd5966321c6d53096f79e03e31162d58bcc38bf9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:34 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0240a2fc6786c1081cb6d7e7fd5966321c6d53096f79e03e31162d58bcc38bf9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:04:34 compute-0 podman[97981]: 2026-01-20 19:04:34.02193042 +0000 UTC m=+0.139829156 container init 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:04:34 compute-0 podman[97981]: 2026-01-20 19:04:34.031404133 +0000 UTC m=+0.149302839 container start 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:04:34 compute-0 podman[97981]: 2026-01-20 19:04:34.035550042 +0000 UTC m=+0.153448768 container attach 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:04:34 compute-0 ceph-mon[75120]: pgmap v91: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 7.2 KiB/s wr, 134 op/s
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 6.347116474187133e-07 of space, bias 4.0, pg target 0.000761653976902456 quantized to 16 (current 1)
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 1)
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Jan 20 19:04:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Jan 20 19:04:34 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:04:34 compute-0 ceph-mgr[75417]: [progress INFO root] Writing back 6 completed events
Jan 20 19:04:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 19:04:34 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:34 compute-0 lvm[98076]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:04:34 compute-0 lvm[98073]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:04:34 compute-0 lvm[98076]: VG ceph_vg1 finished
Jan 20 19:04:34 compute-0 lvm[98073]: VG ceph_vg0 finished
Jan 20 19:04:34 compute-0 lvm[98078]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:04:34 compute-0 lvm[98078]: VG ceph_vg2 finished
Jan 20 19:04:34 compute-0 strange_cohen[97997]: {}
Jan 20 19:04:34 compute-0 systemd[1]: libpod-992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c.scope: Deactivated successfully.
Jan 20 19:04:34 compute-0 podman[97981]: 2026-01-20 19:04:34.9225554 +0000 UTC m=+1.040454106 container died 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:04:34 compute-0 systemd[1]: libpod-992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c.scope: Consumed 1.386s CPU time.
Jan 20 19:04:34 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:04:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-0240a2fc6786c1081cb6d7e7fd5966321c6d53096f79e03e31162d58bcc38bf9-merged.mount: Deactivated successfully.
Jan 20 19:04:34 compute-0 podman[97981]: 2026-01-20 19:04:34.973635201 +0000 UTC m=+1.091533907 container remove 992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=strange_cohen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 20 19:04:34 compute-0 systemd[1]: libpod-conmon-992b425e9d499f4fbebefde61366ad33104dfb19e405ea191b6627071590af3c.scope: Deactivated successfully.
Jan 20 19:04:35 compute-0 sudo[97902]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:04:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:04:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:35 compute-0 sudo[98094]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:04:35 compute-0 sudo[98094]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:04:35 compute-0 sudo[98094]: pam_unix(sudo:session): session closed for user root
Jan 20 19:04:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Jan 20 19:04:35 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:35 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:35 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:35 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Jan 20 19:04:35 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Jan 20 19:04:35 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev 6066e9c5-f4a0-45bc-962b-469ddb50f4f2 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 20 19:04:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Jan 20 19:04:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v93: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 19:04:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Jan 20 19:04:36 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:36 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Jan 20 19:04:36 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Jan 20 19:04:36 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=19/20 n=0 ec=16/16 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=10.294141769s) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active pruub 64.257011414s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:36 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=19/20 n=0 ec=16/16 lis/c=19/19 les/c/f=20/20/0 sis=41 pruub=10.294141769s) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown pruub 64.257011414s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:36 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev 9a54beab-8989-4d82-84b3-86c0f4c75a04 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 20 19:04:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Jan 20 19:04:36 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:36 compute-0 ceph-mon[75120]: osdmap e40: 3 total, 3 up, 3 in
Jan 20 19:04:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:36 compute-0 ceph-mon[75120]: pgmap v93: 11 pgs: 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Jan 20 19:04:37 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Jan 20 19:04:37 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Jan 20 19:04:37 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev b4978653-4f63-4315-89f5-02dcf0604908 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 20 19:04:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Jan 20 19:04:37 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:37 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:37 compute-0 ceph-mon[75120]: osdmap e41: 3 total, 3 up, 3 in
Jan 20 19:04:37 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:37 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:37 compute-0 ceph-mon[75120]: osdmap e42: 3 total, 3 up, 3 in
Jan 20 19:04:37 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=19/20 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1e( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.0( empty local-lis/les=41/42 n=0 ec=16/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.10( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.e( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.12( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.14( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 42 pg[2.1a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=19/19 les/c/f=20/20/0 sis=41) [2] r=0 lpr=41 pi=[19,41)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v96: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 19:04:37 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 19:04:37 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:38 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Jan 20 19:04:38 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Jan 20 19:04:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Jan 20 19:04:38 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:38 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:38 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Jan 20 19:04:38 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Jan 20 19:04:38 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev 56f990ca-20a0-4b2e-91fa-c3bf7841ed6a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 20 19:04:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0)
Jan 20 19:04:38 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=8.025634766s) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active pruub 74.880065918s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:38 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 20 19:04:38 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43 pruub=8.025634766s) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown pruub 74.880065918s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:38 compute-0 ceph-mon[75120]: pgmap v96: 42 pgs: 31 unknown, 11 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:38 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:38 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:38 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:38 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:38 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:38 compute-0 ceph-mon[75120]: osdmap e43: 3 total, 3 up, 3 in
Jan 20 19:04:38 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} : dispatch
Jan 20 19:04:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:38 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43 pruub=13.832959175s) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active pruub 76.604873657s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:38 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 43 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43 pruub=13.832959175s) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown pruub 76.604873657s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Jan 20 19:04:39 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 20 19:04:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Jan 20 19:04:39 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Jan 20 19:04:39 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev cd6e8f66-be8a-47c9-9669-4ce163426a84 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 20 19:04:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Jan 20 19:04:39 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1c( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1a( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.19( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.b( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.4( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.2( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.d( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.10( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.13( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.14( empty local-lis/les=17/18 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=19/20 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.6( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.19( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.3( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.0( empty local-lis/les=43/44 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.19( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.6( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.3( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.0( empty local-lis/les=43/44 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.2( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.4( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.15( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.17( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.16( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 44 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=19/19 les/c/f=20/20/0 sis=43) [0] r=0 lpr=43 pi=[19,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.10( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.13( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.17( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.15( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 44 pg[3.14( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=17/17 les/c/f=18/18/0 sis=43) [1] r=0 lpr=43 pi=[17,43)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:39 compute-0 ceph-mon[75120]: 2.1f scrub starts
Jan 20 19:04:39 compute-0 ceph-mon[75120]: 2.1f scrub ok
Jan 20 19:04:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Jan 20 19:04:39 compute-0 ceph-mon[75120]: osdmap e44: 3 total, 3 up, 3 in
Jan 20 19:04:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:39 compute-0 ceph-mgr[75417]: [progress WARNING root] Starting Global Recovery Event,94 pgs not in active + clean state
Jan 20 19:04:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v99: 104 pgs: 1 peering, 93 unknown, 10 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0)
Jan 20 19:04:39 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 20 19:04:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 19:04:39 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:39 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Jan 20 19:04:39 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Jan 20 19:04:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Jan 20 19:04:40 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:40 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 20 19:04:40 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Jan 20 19:04:40 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Jan 20 19:04:40 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev 0514ce63-36b2-4d6e-8aac-0f594bbf516e (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 20 19:04:40 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=8.028918266s) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active pruub 66.017578125s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Jan 20 19:04:40 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:40 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 45 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45 pruub=8.028918266s) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown pruub 66.017578125s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:40 compute-0 ceph-mon[75120]: pgmap v99: 104 pgs: 1 peering, 93 unknown, 10 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} : dispatch
Jan 20 19:04:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Jan 20 19:04:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:40 compute-0 ceph-mon[75120]: osdmap e45: 3 total, 3 up, 3 in
Jan 20 19:04:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Jan 20 19:04:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Jan 20 19:04:41 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Jan 20 19:04:41 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev a5b1dfa5-4905-495a-a966-243bf036b660 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 20 19:04:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Jan 20 19:04:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 45 pg[6.0( v 39'39 (0'0,39'39] local-lis/les=22/23 n=22 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=45 pruub=8.031952858s) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 39'38 mlcod 39'38 active pruub 77.905471802s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.0( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=45 pruub=8.031952858s) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 39'38 mlcod 0'0 unknown pruub 77.905471802s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.5( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.9( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.7( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 46 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=22/23 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=21/22 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.17( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.10( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.8( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.0( empty local-lis/les=45/46 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.6( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1b( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 46 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=21/21 les/c/f=22/22/0 sis=45) [2] r=0 lpr=45 pi=[21,45)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:41 compute-0 ceph-mon[75120]: 4.1e scrub starts
Jan 20 19:04:41 compute-0 ceph-mon[75120]: 4.1e scrub ok
Jan 20 19:04:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:41 compute-0 ceph-mon[75120]: osdmap e46: 3 total, 3 up, 3 in
Jan 20 19:04:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v102: 150 pgs: 1 peering, 77 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 19:04:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 19:04:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:41 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Jan 20 19:04:41 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Jan 20 19:04:42 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Jan 20 19:04:42 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:42 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:42 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:42 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Jan 20 19:04:42 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Jan 20 19:04:42 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=8.024555206s) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active pruub 74.649314880s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:42 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 47 pg[8.0( v 32'6 (0'0,32'6] local-lis/les=31/32 n=6 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=47 pruub=12.445550919s) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 32'5 mlcod 32'5 active pruub 79.070503235s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:42 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev 60fb2f38-22e0-40bc-9722-d91203be4961 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 20 19:04:42 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Jan 20 19:04:42 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:42 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 47 pg[7.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47 pruub=8.024555206s) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown pruub 74.649314880s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 47 pg[8.0( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=47 pruub=12.445550919s) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 32'5 mlcod 0'0 unknown pruub 79.070503235s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.0( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=22/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 39'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 47 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=22/22 les/c/f=23/23/0 sis=45) [0] r=0 lpr=45 pi=[22,45)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:42 compute-0 ceph-mon[75120]: pgmap v102: 150 pgs: 1 peering, 77 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:42 compute-0 ceph-mon[75120]: 3.1f scrub starts
Jan 20 19:04:42 compute-0 ceph-mon[75120]: 3.1f scrub ok
Jan 20 19:04:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:42 compute-0 ceph-mon[75120]: osdmap e47: 3 total, 3 up, 3 in
Jan 20 19:04:42 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Jan 20 19:04:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Jan 20 19:04:43 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Jan 20 19:04:43 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Jan 20 19:04:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Jan 20 19:04:43 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Jan 20 19:04:43 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Jan 20 19:04:43 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev 8b5a242f-11de-4476-ad09-8e23a44afd16 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1e( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.18( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.19( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1a( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.4( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.5( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.7( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.9( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1( v 32'6 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.3( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.a( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.e( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.8( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.13( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.11( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.10( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.17( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.16( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.15( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.14( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=31/32 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=23/24 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.12( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.10( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.17( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.16( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.19( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.5( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.7( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.0( empty local-lis/les=47/48 n=0 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.0( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=31/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 32'5 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.1( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.14( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.3( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.7( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.13( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.8( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1d( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.17( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.16( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.19( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=31/31 les/c/f=32/32/0 sis=47) [1] r=0 lpr=47 pi=[31,47)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 48 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=23/23 les/c/f=24/24/0 sis=47) [1] r=0 lpr=47 pi=[23,47)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:43 compute-0 ceph-mon[75120]: 3.18 scrub starts
Jan 20 19:04:43 compute-0 ceph-mon[75120]: 2.1e scrub starts
Jan 20 19:04:43 compute-0 ceph-mon[75120]: 2.1e scrub ok
Jan 20 19:04:43 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:43 compute-0 ceph-mon[75120]: osdmap e48: 3 total, 3 up, 3 in
Jan 20 19:04:43 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} : dispatch
Jan 20 19:04:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v105: 212 pgs: 1 peering, 139 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 19:04:43 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 19:04:43 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:43 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Jan 20 19:04:43 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Jan 20 19:04:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Jan 20 19:04:44 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:44 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:44 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Jan 20 19:04:44 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] update: starting ev 16835fd2-3657-4224-ab73-ad40c0663e80 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev 6066e9c5-f4a0-45bc-962b-469ddb50f4f2 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 6066e9c5-f4a0-45bc-962b-469ddb50f4f2 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 9 seconds
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev 9a54beab-8989-4d82-84b3-86c0f4c75a04 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 9a54beab-8989-4d82-84b3-86c0f4c75a04 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 8 seconds
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev b4978653-4f63-4315-89f5-02dcf0604908 (PG autoscaler increasing pool 4 PGs from 1 to 32)
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event b4978653-4f63-4315-89f5-02dcf0604908 (PG autoscaler increasing pool 4 PGs from 1 to 32) in 7 seconds
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev 56f990ca-20a0-4b2e-91fa-c3bf7841ed6a (PG autoscaler increasing pool 5 PGs from 1 to 32)
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 56f990ca-20a0-4b2e-91fa-c3bf7841ed6a (PG autoscaler increasing pool 5 PGs from 1 to 32) in 6 seconds
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev cd6e8f66-be8a-47c9-9669-4ce163426a84 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event cd6e8f66-be8a-47c9-9669-4ce163426a84 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 5 seconds
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev 0514ce63-36b2-4d6e-8aac-0f594bbf516e (PG autoscaler increasing pool 7 PGs from 1 to 32)
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 0514ce63-36b2-4d6e-8aac-0f594bbf516e (PG autoscaler increasing pool 7 PGs from 1 to 32) in 4 seconds
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev a5b1dfa5-4905-495a-a966-243bf036b660 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event a5b1dfa5-4905-495a-a966-243bf036b660 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 3 seconds
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev 60fb2f38-22e0-40bc-9722-d91203be4961 (PG autoscaler increasing pool 9 PGs from 1 to 32)
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 60fb2f38-22e0-40bc-9722-d91203be4961 (PG autoscaler increasing pool 9 PGs from 1 to 32) in 2 seconds
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev 8b5a242f-11de-4476-ad09-8e23a44afd16 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 49 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=33/34 n=210 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49 pruub=12.445308685s) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 39'482 mlcod 39'482 active pruub 81.086898804s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 8b5a242f-11de-4476-ad09-8e23a44afd16 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 1 seconds
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] complete: finished ev 16835fd2-3657-4224-ab73-ad40c0663e80 (PG autoscaler increasing pool 11 PGs from 1 to 32)
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 16835fd2-3657-4224-ab73-ad40c0663e80 (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Jan 20 19:04:44 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 49 pg[9.0( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49 pruub=12.445308685s) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 39'482 mlcod 0'0 unknown pruub 81.086898804s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057880 space 0x5614da73d440 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db056000 space 0x5614db55cb40 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3380 space 0x5614da35f740 0x0~98 clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db055d80 space 0x5614da73ae40 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057f80 space 0x5614db731440 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057b80 space 0x5614da721440 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139880 space 0x5614db5b2540 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3b00 space 0x5614db446b40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139300 space 0x5614da6d3140 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3f80 space 0x5614db4c6b40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139f80 space 0x5614dc5bee40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139680 space 0x5614db537d40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db079180 space 0x5614db72e540 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db13c180 space 0x5614dc5be540 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db079800 space 0x5614db0a6e40 0x0~98 clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3300 space 0x5614da688b40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db110180 space 0x5614db72f140 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12a680 space 0x5614da91c540 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db131c80 space 0x5614db5fb140 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139a00 space 0x5614db5b3d40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e2580 space 0x5614db4d8e40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e2c80 space 0x5614da689440 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139180 space 0x5614da751440 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3680 space 0x5614da688240 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db078900 space 0x5614db534840 0x0~98 clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139c00 space 0x5614da6d2840 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db055a80 space 0x5614da745740 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db11bb00 space 0x5614da91d740 0x0~98 clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12a500 space 0x5614db730840 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12af00 space 0x5614da35e840 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db079600 space 0x5614db0a7a40 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e2280 space 0x5614db447440 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db139100 space 0x5614db72f740 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db079780 space 0x5614db55ae40 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614da2c3880 space 0x5614db5b3740 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db056180 space 0x5614da7a8240 0x0~98 clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3d80 space 0x5614db4c6240 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db131980 space 0x5614da96e840 0x0~98 clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3d00 space 0x5614da720b40 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057600 space 0x5614da68c240 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614da2c1900 space 0x5614db444540 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3880 space 0x5614db4c7440 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057980 space 0x5614da73cb40 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db138200 space 0x5614da750b40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db110f80 space 0x5614da784e40 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057900 space 0x5614db55d440 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db079680 space 0x5614da7a9d40 0x0~98 clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db11af00 space 0x5614db536e40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614da92de00 space 0x5614da73a240 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12a480 space 0x5614db4d9a40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0d2080 space 0x5614da720540 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e2780 space 0x5614db4d8540 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12ae00 space 0x5614da744240 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db11b000 space 0x5614da6d3a40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db055700 space 0x5614da726240 0x0~9a clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e2080 space 0x5614db447d40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db0e3b80 space 0x5614db5b2e40 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db057580 space 0x5614db55e840 0x0~98 clean)
Jan 20 19:04:44 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1).collection(9.0_head 0x5614dc075b00) split_cache   moving buffer(0x5614db12a200 space 0x5614dc5bf740 0x0~6e clean)
Jan 20 19:04:44 compute-0 ceph-mon[75120]: 3.18 scrub ok
Jan 20 19:04:44 compute-0 ceph-mon[75120]: pgmap v105: 212 pgs: 1 peering, 139 unknown, 72 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:44 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:44 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:44 compute-0 ceph-mon[75120]: 3.19 scrub starts
Jan 20 19:04:44 compute-0 ceph-mon[75120]: 3.19 scrub ok
Jan 20 19:04:44 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Jan 20 19:04:44 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:44 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:44 compute-0 ceph-mon[75120]: osdmap e49: 3 total, 3 up, 3 in
Jan 20 19:04:44 compute-0 ceph-mgr[75417]: [progress INFO root] Writing back 16 completed events
Jan 20 19:04:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 19:04:44 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Jan 20 19:04:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Jan 20 19:04:44 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 49 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=35/36 n=9 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=49 pruub=13.920217514s) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 39'17 mlcod 39'17 active pruub 76.470893860s@ mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:44 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 49 pg[10.0( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=49 pruub=13.920217514s) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 39'17 mlcod 0'0 unknown pruub 76.470893860s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Jan 20 19:04:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Jan 20 19:04:45 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.14( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.11( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.10( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.13( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.12( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.9( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.2( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.3( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.5( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1a( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.18( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.12( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.19( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.10( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.11( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.19( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.18( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.7( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.5( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.6( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.4( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.3( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.f( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.8( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.9( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.b( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.c( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.a( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.e( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.2( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.13( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.4( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=33/34 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.14( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.10( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.14( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.12( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.15( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.16( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.17( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.d( v 39'18 lc 0'0 (0'0,39'18] local-lis/les=35/36 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.12( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1d( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.0( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 39'482 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.2( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1c( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.18( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.5( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.0( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 39'17 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.3( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.9( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.c( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.a( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1a( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.18( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.4( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.5( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.14( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.15( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.d( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 50 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=35/35 les/c/f=36/36/0 sis=49) [2] r=0 lpr=49 pi=[35,49)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 50 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=33/33 les/c/f=34/34/0 sis=49) [1] r=0 lpr=49 pi=[33,49)/1 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:45 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:45 compute-0 ceph-mon[75120]: 3.7 scrub starts
Jan 20 19:04:45 compute-0 ceph-mon[75120]: 3.7 scrub ok
Jan 20 19:04:45 compute-0 ceph-mon[75120]: osdmap e50: 3 total, 3 up, 3 in
Jan 20 19:04:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v108: 274 pgs: 1 peering, 62 unknown, 211 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Jan 20 19:04:45 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:46 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Jan 20 19:04:46 compute-0 ceph-mon[75120]: pgmap v108: 274 pgs: 1 peering, 62 unknown, 211 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:46 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} : dispatch
Jan 20 19:04:46 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:46 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Jan 20 19:04:46 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Jan 20 19:04:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v110: 305 pgs: 1 peering, 93 unknown, 211 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:47 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Jan 20 19:04:47 compute-0 ceph-mon[75120]: osdmap e51: 3 total, 3 up, 3 in
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51 pruub=12.700772285s) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active pruub 85.138977051s@ mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 51 pg[11.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51 pruub=12.700772285s) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown pruub 85.138977051s@ mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e51 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Jan 20 19:04:48 compute-0 ceph-mon[75120]: pgmap v110: 305 pgs: 1 peering, 93 unknown, 211 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Jan 20 19:04:48 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=37/38 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.16( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.13( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.0( empty local-lis/les=51/52 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.5( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.7( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 52 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=37/37 les/c/f=38/38/0 sis=51) [1] r=0 lpr=51 pi=[37,51)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:48 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Jan 20 19:04:48 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Jan 20 19:04:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v112: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:49 compute-0 ceph-mon[75120]: osdmap e52: 3 total, 3 up, 3 in
Jan 20 19:04:49 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1a scrub starts
Jan 20 19:04:49 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1a scrub ok
Jan 20 19:04:49 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Jan 20 19:04:49 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Jan 20 19:04:50 compute-0 systemd[76564]: Starting Mark boot as successful...
Jan 20 19:04:50 compute-0 systemd[76564]: Finished Mark boot as successful.
Jan 20 19:04:50 compute-0 ceph-mon[75120]: 4.1c scrub starts
Jan 20 19:04:50 compute-0 ceph-mon[75120]: 4.1c scrub ok
Jan 20 19:04:50 compute-0 ceph-mon[75120]: pgmap v112: 305 pgs: 1 peering, 31 unknown, 273 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:50 compute-0 ceph-mon[75120]: 3.1a scrub starts
Jan 20 19:04:50 compute-0 ceph-mon[75120]: 3.1a scrub ok
Jan 20 19:04:51 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Jan 20 19:04:51 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Jan 20 19:04:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v113: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Jan 20 19:04:51 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764896393s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.867973328s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.768498421s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871620178s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.787466049s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.890731812s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.768334389s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871620178s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.787409782s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.890731812s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.767777443s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871459961s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.767750740s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871459961s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764649391s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.867973328s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.784059525s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.890701294s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.784017563s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.890701294s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764866829s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871673584s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764767647s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871673584s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764997482s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871994019s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764970779s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871994019s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.766935349s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874229431s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.766908646s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874229431s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764303207s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871833801s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764279366s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871833801s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.786118507s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.893867493s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.786084175s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.893867493s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.786574364s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.893852234s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785934448s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.893852234s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763784409s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.871849060s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763747215s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.871849060s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.766010284s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874153137s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785912514s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.894157410s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.765964508s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874153137s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785885811s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894157410s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785758972s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.894165039s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.765930176s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874359131s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763663292s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.872108459s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785725594s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894165039s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763641357s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.872108459s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.765893936s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874359131s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785747528s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.894439697s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785728455s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894439697s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763573647s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.872352600s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763344765s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.872116089s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785593987s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 active pruub 94.894432068s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53 pruub=14.785578728s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894432068s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763516426s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.872352600s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763413429s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.872367859s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763390541s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.872367859s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763951302s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.873092651s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763233185s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.872367859s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763247490s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.872116089s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763934135s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.873092651s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763196945s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.872367859s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764789581s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874176025s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763873100s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.873283386s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764762878s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874176025s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.763821602s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.873283386s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764575958s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874061584s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764553070s) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874061584s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764651299s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 91.874244690s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.764633179s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 91.874244690s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-mon[75120]: 4.1d scrub starts
Jan 20 19:04:51 compute-0 ceph-mon[75120]: 4.1d scrub ok
Jan 20 19:04:51 compute-0 ceph-mon[75120]: 2.9 scrub starts
Jan 20 19:04:51 compute-0 ceph-mon[75120]: 2.9 scrub ok
Jan 20 19:04:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.10( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.12( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.14( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.18( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.8( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.1b( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.1a( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.9( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.5( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.e( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.1( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.7( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.1( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.a( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.13( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.d( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.f( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.11( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[4.1c( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.4( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.789543152s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.032371521s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.789494514s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.032371521s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[4.2( empty local-lis/les=0/0 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964189529s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.827888489s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758675575s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.004310608s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964165688s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.827888489s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780668259s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.644523621s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.17( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.762928963s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009040833s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.762892723s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009040833s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736981392s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983207703s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785997391s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.032333374s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785974503s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.032333374s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.790488243s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037017822s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736595154s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983146667s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736571312s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983146667s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785683632s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.032432556s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785655975s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.032432556s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736198425s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983116150s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736177444s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983116150s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.761826515s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.008903503s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.1b( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.761796951s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.008903503s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.14( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.1f( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.735393524s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983123779s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.735364914s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983123779s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.761089325s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009010315s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.761060715s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009010315s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.1e( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.760754585s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.008918762s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.760728836s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.008918762s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.734626770s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983070374s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.734597206s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983070374s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.760367393s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.008987427s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.760339737s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.008987427s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780646324s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.644523621s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788419724s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037208557s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788397789s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037208557s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758645058s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.004310608s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780597687s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.644500732s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.734041214s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983276367s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.759681702s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.008995056s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.759655952s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.008995056s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.1e( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.16( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.733540535s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983055115s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.733518600s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983055115s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787724495s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037277222s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787688255s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037277222s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.14( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.759292603s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.008972168s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.759272575s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.008972168s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787429810s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037322998s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787406921s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037322998s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.733282089s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983276367s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787016869s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037391663s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786974907s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037391663s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.732527733s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983039856s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.732491493s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983039856s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758346558s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009094238s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758312225s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009094238s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.13( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786029816s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037376404s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785999298s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037376404s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.731573105s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983001709s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.731555939s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.983016968s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.731534958s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983001709s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.18( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.731522560s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983016968s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758448601s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.010025024s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.758414268s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.010025024s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785807610s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037528992s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785775185s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037528992s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.736899376s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.983207703s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757900238s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009811401s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.14( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757874489s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009811401s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.790459633s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037017822s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785063744s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037483215s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.730375290s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982833862s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.730344772s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982833862s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.1f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785021782s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037483215s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.730322838s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982826233s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.730132103s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982826233s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757175446s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009902954s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757140160s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009902954s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.15( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.1b( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780577660s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.644500732s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785690308s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.649703979s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785670280s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.649703979s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784358025s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.037574768s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.11( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784296036s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.037574768s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756456375s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.009887695s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784154892s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.037628174s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.729422569s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982917786s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756424904s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.009887695s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784133911s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.037628174s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.729400635s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982917786s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.729257584s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982788086s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.729220390s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982788086s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756373405s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.010078430s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756349564s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.010078430s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728921890s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982780457s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756165504s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.010063171s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756142616s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.010063171s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728890419s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982780457s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788300514s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.042434692s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757574081s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.011749268s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728663445s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982841492s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.757555008s) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.011749268s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728637695s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982841492s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.10( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787322998s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.041717529s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728191376s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982749939s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788058281s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.042434692s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787138939s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.041717529s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.728160858s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982749939s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787509918s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.042221069s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727574348s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982307434s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727550507s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982307434s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787473679s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.042221069s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.7( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.18( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786794662s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.041732788s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786738396s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.041732788s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786662102s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.041809082s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786642075s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.041809082s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727453232s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982788086s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727432251s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982788086s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756210327s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.011566162s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727087021s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982635498s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786172867s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.041740417s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.727060318s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982635498s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.756165504s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.011566162s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786118507s) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.041740417s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786162376s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 active pruub 79.041816711s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786121368s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY pruub 79.041816711s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.755785942s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.011741638s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785891533s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.041847229s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.726333618s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982315063s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.755758286s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.011741638s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.f( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.726308823s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982315063s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785861015s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.041847229s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.755560875s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.011718750s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.755541801s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.011718750s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.721095085s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.977561951s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786021233s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 active pruub 79.042503357s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.721063614s) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.977561951s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785986900s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY pruub 79.042503357s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.10( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.725831032s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 active pruub 78.982559204s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53 pruub=9.725807190s) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY pruub 78.982559204s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.1e( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.4( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.b( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.8( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.1a( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.15( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.1d( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.f( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.15( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.751092911s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615264893s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.7( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.11( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.8( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.11( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.752708435s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 active pruub 83.011756897s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53 pruub=13.752679825s) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY pruub 83.011756897s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.751070023s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615264893s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.18( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.3( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.19( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.1c( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780358315s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.644607544s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780264854s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.644500732s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746891975s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.611129761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780333519s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.644607544s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.9( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.12( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.e( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.966338158s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.830680847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.5( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.780205727s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.644500732s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.966318130s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.830680847s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.6( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.2( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.2( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.752981186s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617393494s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.d( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746796608s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.611129761s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.752961159s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617393494s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787708282s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652297974s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.d( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.1( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.5( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.12( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787687302s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652297974s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.966114998s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.830795288s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.b( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779447556s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.644233704s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.5( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965987206s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.830795288s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.c( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.4( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.c( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.8( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.3( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.e( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[5.2( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779420853s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.644233704s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779077530s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.644195557s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.2( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.d( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.750434875s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615562439s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779045105s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.644195557s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.750388145s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615562439s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779064178s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.644477844s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.779027939s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.644477844s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787136078s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652626038s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.3( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.9( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.e( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.2( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.8( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965316772s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.830879211s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.8( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.3( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.787094116s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652626038s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.e( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.1( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.1( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.a( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965293884s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.830879211s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.778452873s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.644050598s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.e( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.1c( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.15( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.778429985s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.644050598s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.4( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.1d( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.18( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.7( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.f( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965420723s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831054688s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.1b( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965401649s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831054688s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965251923s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831024170s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.11( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.749588966s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615371704s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.1a( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.749567032s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615371704s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.965233803s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831024170s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.16( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786496162s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652381897s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.778088570s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.644058228s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.1b( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.1c( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.11( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.1e( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[3.16( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[2.1f( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786456108s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652381897s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[11.1f( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.f( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[8.1c( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.778070450s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.644058228s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964951515s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.830978394s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[10.17( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964932442s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.830978394s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777845383s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643966675s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 53 pg[7.15( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777825356s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643966675s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.749189377s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615432739s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786195755s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652458191s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.4( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.b( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.6( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786175728s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652458191s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964683533s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831100464s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777526855s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643943787s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964662552s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831100464s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777509689s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643943787s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748828888s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615470886s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.6( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748806953s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615470886s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777205467s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.643989563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.777182579s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.643989563s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776851654s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.643798828s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776831627s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.643798828s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.a( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.9( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.1( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.9( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.4( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964165688s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831153870s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.964144707s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831153870s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776750565s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643852234s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748430252s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615554810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776727676s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643852234s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748412132s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615554810s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776565552s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.643806458s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785219193s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652481079s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776546478s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.643806458s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785199165s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652481079s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776668549s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.643989563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.776627541s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.643989563s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963844299s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831245422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748086929s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615570068s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.963767052s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831245422s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.3( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748064041s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615570068s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784809113s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652488708s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784788132s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652488708s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.c( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.9( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.6( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.6( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.f( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.775791168s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643661499s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.775772095s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643661499s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.747633934s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615554810s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[11.19( empty local-lis/les=0/0 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.1a( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.12( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.747612000s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615554810s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784606934s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.652587891s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784584045s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.652587891s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.775033951s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643058777s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.775012970s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643058777s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748167038s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.616256714s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.748147011s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.616256714s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962833405s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831268311s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962810516s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831268311s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.1f( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962690353s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831321716s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962669373s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831321716s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.774422646s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.643180847s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.774404526s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.643180847s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788712502s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657707214s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962219238s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831245422s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.788688660s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657707214s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.747247696s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615432739s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.962195396s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831245422s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773546219s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642776489s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773426056s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642776489s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773351669s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642852783s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773334503s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642852783s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773325920s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642921448s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773024559s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642700195s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.15( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.1d( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[8.18( empty local-lis/les=0/0 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773303032s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642921448s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.773001671s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642700195s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.772718430s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642684937s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.772699356s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642684937s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745589256s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.615661621s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745562553s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.615661621s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.772440910s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642616272s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.772425652s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642616272s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961070061s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831382751s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961057663s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831382751s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745681763s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.616119385s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[3.17( empty local-lis/les=0/0 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745664597s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.616119385s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 53 pg[7.13( empty local-lis/les=0/0 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.772005081s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642562866s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771986961s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642562866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786767960s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657432556s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960700989s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831405640s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786745071s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657432556s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771851540s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642578125s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960680962s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831405640s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771830559s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642578125s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745315552s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.616165161s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960508347s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831352234s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745295525s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.616165161s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960484505s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831352234s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771564484s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642524719s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771542549s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642524719s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771504402s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642517090s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786661148s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657684326s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786640167s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657684326s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771484375s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642517090s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960370064s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831489563s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960350037s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831489563s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771395683s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642555237s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746199608s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617378235s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771377563s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642555237s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746182442s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617378235s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746048927s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617370605s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786839485s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 39'483 active pruub 85.658187866s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.746029854s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617370605s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771099091s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642494202s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.786801338s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 39'483 unknown NOTIFY pruub 85.658187866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.771081924s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642494202s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960078239s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831542969s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960051537s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831542969s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.770812035s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642379761s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.770791054s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642379761s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.959890366s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.831542969s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.959867477s) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.831542969s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745687485s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617469788s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.770328522s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.642173767s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.770303726s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.642173767s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745657921s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617469788s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961028099s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.833045959s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.961006165s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.833045959s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745185852s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617462158s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.745160103s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617462158s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960451126s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.832916260s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785326958s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657814026s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785302162s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657814026s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960332870s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.832969666s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960268974s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.832916260s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.960315704s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.832969666s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.785018921s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657798767s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784984589s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657798767s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.768787384s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.641769409s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.768755913s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.641769409s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784722328s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.657867432s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.784701347s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.657867432s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.769007683s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642257690s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.768991470s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642257690s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.744286537s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617675781s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.15( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.744267464s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617675781s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.959568024s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.833099365s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.959545135s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.833099365s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758837700s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.641807556s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758797646s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.641807556s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758874893s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.641998291s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.743901253s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617546082s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.774927139s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 active pruub 85.658134460s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.734324455s) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617546082s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.949831963s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 active pruub 88.833076477s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=53 pruub=9.774906158s) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 85.658134460s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53 pruub=12.949810028s) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY pruub 88.833076477s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758847237s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.641998291s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.734236717s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 active pruub 87.617637634s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[3.17( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53 pruub=11.734215736s) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY pruub 87.617637634s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.755042076s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 active pruub 91.638595581s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.755023003s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY pruub 91.638595581s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758173943s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.641777039s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758000374s) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.641777039s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.770358086s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 active pruub 91.642242432s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53 pruub=15.758327484s) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY pruub 91.642242432s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.12( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.10( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.17( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.11( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.15( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.12( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.13( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.1a( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.1d( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.19( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.16( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.6( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.9( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.d( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.c( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.11( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.7( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.f( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.f( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.3( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.b( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.5( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.4( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.9( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.2( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.13( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.6( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.1b( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.1( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[10.14( empty local-lis/les=0/0 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.1a( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.19( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[2.a( empty local-lis/les=0/0 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:51 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 53 pg[5.18( empty local-lis/les=0/0 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Jan 20 19:04:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Jan 20 19:04:52 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Jan 20 19:04:52 compute-0 ceph-mon[75120]: pgmap v113: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 19:04:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Jan 20 19:04:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:04:52 compute-0 ceph-mon[75120]: osdmap e53: 3 total, 3 up, 3 in
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.11( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.5( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.9( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.3( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1d( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1b( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[9.1( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.14( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.11( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.17( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.15( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.12( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.16( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.d( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.5( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.13( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.3( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.a( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.c( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.4( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.9( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.7( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.1( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.f( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.6( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.9( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.12( v 50'19 lc 39'17 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.1a( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[10.14( v 50'19 lc 36'7 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[2.1b( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.19( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.1d( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.16( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.8( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.3( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.e( v 50'19 lc 36'4 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.15( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.2( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.1f( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.2( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.5( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.f( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.1c( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.4( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.1d( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.d( v 50'19 lc 36'5 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.7( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.9( v 50'19 lc 36'8 (0'0,50'19] local-lis/les=53/54 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.18( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.b( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.19( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[5.1e( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.10( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.1f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.1b( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.f( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.4( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.15( v 50'19 lc 36'3 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.4( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.c( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.1( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.11( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.18( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.14( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.9( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[2.13( empty local-lis/les=53/54 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=0 lpr=53 pi=[41,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.3( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.6( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.6( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.3( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.6( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.2( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.4( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.f( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.d( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.7( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.5( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.9( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[5.18( empty local-lis/les=53/54 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.8( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.14( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.12( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[4.10( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.f( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.1( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.9( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.17( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.13( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:52 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e54 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v116: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 20 19:04:53 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 20 19:04:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Jan 20 19:04:53 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 20 19:04:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Jan 20 19:04:53 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 19:04:53 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 19:04:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Jan 20 19:04:53 compute-0 ceph-mon[75120]: osdmap e54: 3 total, 3 up, 3 in
Jan 20 19:04:53 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 20 19:04:53 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} : dispatch
Jan 20 19:04:53 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.636933327s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894454956s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:53 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.636891365s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894454956s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:53 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635606766s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894157410s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:53 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635582924s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894157410s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:53 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635617256s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894279480s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:53 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635570526s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894279480s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:53 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635735512s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894470215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:53 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635711670s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894470215s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:53 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:53 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:54 compute-0 ceph-mgr[75417]: [progress INFO root] Completed event 7d2393c7-bd7a-4697-bc54-3febf0a0d0e3 (Global Recovery Event) in 15 seconds
Jan 20 19:04:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Jan 20 19:04:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Jan 20 19:04:54 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080907822s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.078788757s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080473900s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.078704834s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080650330s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.079368591s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:54 compute-0 ceph-mon[75120]: pgmap v116: 305 pgs: 305 active+clean; 461 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:04:54 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 19:04:54 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Jan 20 19:04:54 compute-0 ceph-mon[75120]: osdmap e55: 3 total, 3 up, 3 in
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079759598s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.079399109s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:54 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:54 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:54 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:54 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:54 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:54 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:54 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:54 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:54 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v119: 305 pgs: 2 active+recovery_wait, 12 active+recovery_wait+remapped, 3 active+recovery_wait+degraded, 8 peering, 1 active+recovering, 279 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 82/249 objects misplaced (32.932%); 517 B/s, 2 keys/s, 8 objects/s recovering
Jan 20 19:04:55 compute-0 ceph-mon[75120]: log_channel(cluster) log [WRN] : Health check failed: Degraded data redundancy: 4/249 objects degraded (1.606%), 3 pgs degraded (PG_DEGRADED)
Jan 20 19:04:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Jan 20 19:04:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Jan 20 19:04:55 compute-0 ceph-mon[75120]: osdmap e56: 3 total, 3 up, 3 in
Jan 20 19:04:55 compute-0 ceph-mon[75120]: pgmap v119: 305 pgs: 2 active+recovery_wait, 12 active+recovery_wait+remapped, 3 active+recovery_wait+degraded, 8 peering, 1 active+recovering, 279 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 82/249 objects misplaced (32.932%); 517 B/s, 2 keys/s, 8 objects/s recovering
Jan 20 19:04:55 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=50'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053964615s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.078872681s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054486275s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079605103s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054390907s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079986572s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054670334s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 active pruub 94.079673767s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053636551s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079513550s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053482056s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079612732s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053318024s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079582214s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053416252s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079689026s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053354263s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079818726s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053389549s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079895020s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053203583s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079658508s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053125381s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079887390s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:55 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Jan 20 19:04:56 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Jan 20 19:04:56 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Jan 20 19:04:56 compute-0 ceph-mon[75120]: Health check failed: Degraded data redundancy: 4/249 objects degraded (1.606%), 3 pgs degraded (PG_DEGRADED)
Jan 20 19:04:56 compute-0 ceph-mon[75120]: osdmap e57: 3 total, 3 up, 3 in
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=55'486 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:56 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:04:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v122: 305 pgs: 2 active+recovery_wait, 12 active+recovery_wait+remapped, 3 active+recovery_wait+degraded, 8 peering, 1 active+recovering, 279 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 82/249 objects misplaced (32.932%); 517 B/s, 2 keys/s, 8 objects/s recovering
Jan 20 19:04:57 compute-0 ceph-mon[75120]: osdmap e58: 3 total, 3 up, 3 in
Jan 20 19:04:57 compute-0 ceph-mon[75120]: pgmap v122: 305 pgs: 2 active+recovery_wait, 12 active+recovery_wait+remapped, 3 active+recovery_wait+degraded, 8 peering, 1 active+recovering, 279 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 4/249 objects degraded (1.606%); 82/249 objects misplaced (32.932%); 517 B/s, 2 keys/s, 8 objects/s recovering
Jan 20 19:04:57 compute-0 sshd-session[98120]: Invalid user banxgg from 45.148.10.240 port 59824
Jan 20 19:04:58 compute-0 sshd-session[98120]: Connection closed by invalid user banxgg 45.148.10.240 port 59824 [preauth]
Jan 20 19:04:58 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 20 19:04:58 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 20 19:04:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:04:58 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 20 19:04:58 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 20 19:04:58 compute-0 ceph-mon[75120]: 2.1a scrub starts
Jan 20 19:04:58 compute-0 ceph-mon[75120]: 2.1a scrub ok
Jan 20 19:04:58 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 20 19:04:58 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 20 19:04:59 compute-0 ceph-mgr[75417]: [progress INFO root] Writing back 17 completed events
Jan 20 19:04:59 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Jan 20 19:04:59 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:04:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v123: 305 pgs: 12 active+recovery_wait+remapped, 4 peering, 289 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 82/249 objects misplaced (32.932%); 478 B/s, 1 keys/s, 7 objects/s recovering
Jan 20 19:04:59 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 20 19:04:59 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 20 19:05:00 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/249 objects degraded (1.606%), 3 pgs degraded)
Jan 20 19:05:00 compute-0 ceph-mon[75120]: log_channel(cluster) log [INF] : Cluster is now healthy
Jan 20 19:05:00 compute-0 ceph-mon[75120]: 11.16 scrub starts
Jan 20 19:05:00 compute-0 ceph-mon[75120]: 11.16 scrub ok
Jan 20 19:05:00 compute-0 ceph-mon[75120]: 4.1f scrub starts
Jan 20 19:05:00 compute-0 ceph-mon[75120]: 4.1f scrub ok
Jan 20 19:05:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:05:00 compute-0 ceph-mon[75120]: pgmap v123: 305 pgs: 12 active+recovery_wait+remapped, 4 peering, 289 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 82/249 objects misplaced (32.932%); 478 B/s, 1 keys/s, 7 objects/s recovering
Jan 20 19:05:00 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 20 19:05:00 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 20 19:05:01 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 20 19:05:01 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 20 19:05:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v124: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 906 B/s, 17 objects/s recovering
Jan 20 19:05:01 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 20 19:05:01 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 20 19:05:01 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Jan 20 19:05:01 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 20 19:05:01 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Jan 20 19:05:01 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 19:05:01 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 19:05:01 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Jan 20 19:05:01 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Jan 20 19:05:01 compute-0 ceph-mon[75120]: 4.6 scrub starts
Jan 20 19:05:01 compute-0 ceph-mon[75120]: 4.6 scrub ok
Jan 20 19:05:01 compute-0 ceph-mon[75120]: Health check cleared: PG_DEGRADED (was: Degraded data redundancy: 4/249 objects degraded (1.606%), 3 pgs degraded)
Jan 20 19:05:01 compute-0 ceph-mon[75120]: Cluster is now healthy
Jan 20 19:05:01 compute-0 ceph-mon[75120]: 5.1f scrub starts
Jan 20 19:05:01 compute-0 ceph-mon[75120]: 5.1f scrub ok
Jan 20 19:05:01 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 20 19:05:01 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} : dispatch
Jan 20 19:05:02 compute-0 ceph-mon[75120]: 7.19 scrub starts
Jan 20 19:05:02 compute-0 ceph-mon[75120]: 7.19 scrub ok
Jan 20 19:05:02 compute-0 ceph-mon[75120]: pgmap v124: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 906 B/s, 17 objects/s recovering
Jan 20 19:05:02 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 19:05:02 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Jan 20 19:05:02 compute-0 ceph-mon[75120]: osdmap e59: 3 total, 3 up, 3 in
Jan 20 19:05:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707896233s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897071838s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707865715s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897239685s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707477570s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897384644s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707047462s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897460938s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:02 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:02 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:02 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:02 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:02 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:03 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 20 19:05:03 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 20 19:05:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e59 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v126: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 540 B/s, 11 objects/s recovering
Jan 20 19:05:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 20 19:05:03 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 20 19:05:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Jan 20 19:05:03 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 20 19:05:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Jan 20 19:05:03 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 19:05:03 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 19:05:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Jan 20 19:05:03 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Jan 20 19:05:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.851051331s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 active pruub 102.894393921s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.850987434s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894393921s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.849040031s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 active pruub 102.894691467s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.848990440s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894691467s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:03 compute-0 ceph-mon[75120]: 10.1f scrub starts
Jan 20 19:05:03 compute-0 ceph-mon[75120]: 10.1f scrub ok
Jan 20 19:05:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 20 19:05:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} : dispatch
Jan 20 19:05:03 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:03 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Jan 20 19:05:04 compute-0 ceph-mon[75120]: pgmap v126: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 540 B/s, 11 objects/s recovering
Jan 20 19:05:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 19:05:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Jan 20 19:05:04 compute-0 ceph-mon[75120]: osdmap e60: 3 total, 3 up, 3 in
Jan 20 19:05:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Jan 20 19:05:04 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Jan 20 19:05:04 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:04 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:04 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 20 19:05:04 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 20 19:05:05 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 20 19:05:05 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 20 19:05:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v129: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 690 B/s, 1 keys/s, 14 objects/s recovering
Jan 20 19:05:05 compute-0 ceph-mon[75120]: osdmap e61: 3 total, 3 up, 3 in
Jan 20 19:05:05 compute-0 ceph-mon[75120]: 5.10 scrub starts
Jan 20 19:05:05 compute-0 ceph-mon[75120]: 5.10 scrub ok
Jan 20 19:05:06 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 20 19:05:06 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 20 19:05:06 compute-0 ceph-mon[75120]: 4.b scrub starts
Jan 20 19:05:06 compute-0 ceph-mon[75120]: 4.b scrub ok
Jan 20 19:05:06 compute-0 ceph-mon[75120]: pgmap v129: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 690 B/s, 1 keys/s, 14 objects/s recovering
Jan 20 19:05:06 compute-0 ceph-mon[75120]: 10.1d scrub starts
Jan 20 19:05:06 compute-0 ceph-mon[75120]: 10.1d scrub ok
Jan 20 19:05:06 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 20 19:05:06 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 20 19:05:06 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 20 19:05:06 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 20 19:05:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v130: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 1 keys/s, 1 objects/s recovering
Jan 20 19:05:07 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 20 19:05:07 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 20 19:05:07 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 20 19:05:07 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 20 19:05:08 compute-0 ceph-mon[75120]: 4.3 scrub starts
Jan 20 19:05:08 compute-0 ceph-mon[75120]: 4.3 scrub ok
Jan 20 19:05:08 compute-0 ceph-mon[75120]: 8.16 scrub starts
Jan 20 19:05:08 compute-0 ceph-mon[75120]: 8.16 scrub ok
Jan 20 19:05:08 compute-0 ceph-mon[75120]: pgmap v130: 305 pgs: 2 peering, 303 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 107 B/s, 1 keys/s, 1 objects/s recovering
Jan 20 19:05:08 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 20 19:05:08 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 20 19:05:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e61 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:09 compute-0 ceph-mon[75120]: 4.0 scrub starts
Jan 20 19:05:09 compute-0 ceph-mon[75120]: 4.0 scrub ok
Jan 20 19:05:09 compute-0 ceph-mon[75120]: 3.1c scrub starts
Jan 20 19:05:09 compute-0 ceph-mon[75120]: 3.1c scrub ok
Jan 20 19:05:09 compute-0 ceph-mon[75120]: 10.1c scrub starts
Jan 20 19:05:09 compute-0 ceph-mon[75120]: 10.1c scrub ok
Jan 20 19:05:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v131: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 284 B/s, 1 keys/s, 1 objects/s recovering
Jan 20 19:05:09 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 20 19:05:09 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 20 19:05:09 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Jan 20 19:05:09 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 20 19:05:09 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 20 19:05:09 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 20 19:05:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Jan 20 19:05:10 compute-0 ceph-mon[75120]: pgmap v131: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 284 B/s, 1 keys/s, 1 objects/s recovering
Jan 20 19:05:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 20 19:05:10 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} : dispatch
Jan 20 19:05:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 19:05:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 19:05:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Jan 20 19:05:10 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430742264s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 active pruub 108.897323608s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:10 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:10 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Jan 20 19:05:10 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430679321s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 active pruub 108.897682190s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:10 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:10 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:10 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:10 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 20 19:05:10 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 20 19:05:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 20 19:05:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 20 19:05:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Jan 20 19:05:11 compute-0 ceph-mon[75120]: 4.c scrub starts
Jan 20 19:05:11 compute-0 ceph-mon[75120]: 4.c scrub ok
Jan 20 19:05:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 19:05:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Jan 20 19:05:11 compute-0 ceph-mon[75120]: osdmap e62: 3 total, 3 up, 3 in
Jan 20 19:05:11 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 20 19:05:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Jan 20 19:05:11 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 20 19:05:11 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Jan 20 19:05:11 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:11 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v134: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 327 B/s, 1 keys/s, 1 objects/s recovering
Jan 20 19:05:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 20 19:05:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 20 19:05:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Jan 20 19:05:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 20 19:05:11 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 20 19:05:11 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 20 19:05:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Jan 20 19:05:12 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 19:05:12 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 19:05:12 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Jan 20 19:05:12 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Jan 20 19:05:12 compute-0 ceph-mon[75120]: 4.15 scrub starts
Jan 20 19:05:12 compute-0 ceph-mon[75120]: 4.15 scrub ok
Jan 20 19:05:12 compute-0 ceph-mon[75120]: 8.17 scrub starts
Jan 20 19:05:12 compute-0 ceph-mon[75120]: 8.17 scrub ok
Jan 20 19:05:12 compute-0 ceph-mon[75120]: 2.14 scrub starts
Jan 20 19:05:12 compute-0 ceph-mon[75120]: 2.14 scrub ok
Jan 20 19:05:12 compute-0 ceph-mon[75120]: osdmap e63: 3 total, 3 up, 3 in
Jan 20 19:05:12 compute-0 ceph-mon[75120]: pgmap v134: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 327 B/s, 1 keys/s, 1 objects/s recovering
Jan 20 19:05:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 20 19:05:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} : dispatch
Jan 20 19:05:12 compute-0 ceph-mon[75120]: 4.16 scrub starts
Jan 20 19:05:12 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 20 19:05:12 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 20 19:05:12 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465168953s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.653533936s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:12 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:12 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:12 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463762283s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.653617859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:12 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:12 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473500252s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.663787842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:12 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:12 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:12 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:12 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467473984s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.658554077s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:12 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:12 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Jan 20 19:05:13 compute-0 ceph-mon[75120]: 4.16 scrub ok
Jan 20 19:05:13 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 19:05:13 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Jan 20 19:05:13 compute-0 ceph-mon[75120]: osdmap e64: 3 total, 3 up, 3 in
Jan 20 19:05:13 compute-0 ceph-mon[75120]: 10.1b scrub starts
Jan 20 19:05:13 compute-0 ceph-mon[75120]: 10.1b scrub ok
Jan 20 19:05:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Jan 20 19:05:13 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Jan 20 19:05:13 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:13 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:13 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:13 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:13 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:13 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:13 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:13 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:13 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:13 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:13 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:13 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:13 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:13 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:13 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:13 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:13 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 20 19:05:13 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 20 19:05:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v137: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 20 19:05:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 20 19:05:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Jan 20 19:05:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 20 19:05:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Jan 20 19:05:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 19:05:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 19:05:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Jan 20 19:05:14 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Jan 20 19:05:14 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=13.457354546s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 active pruub 116.284957886s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:14 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470577240s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 active pruub 117.298309326s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:14 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=13.457168579s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 unknown NOTIFY pruub 116.284957886s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:14 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470458031s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298309326s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:14 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470116615s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 active pruub 117.298522949s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:14 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470049858s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298522949s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:14 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.469445229s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 active pruub 117.298500061s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:14 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.469401360s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298500061s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:14 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:14 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:14 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:14 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:14 compute-0 ceph-mon[75120]: osdmap e65: 3 total, 3 up, 3 in
Jan 20 19:05:14 compute-0 ceph-mon[75120]: 2.12 scrub starts
Jan 20 19:05:14 compute-0 ceph-mon[75120]: 2.12 scrub ok
Jan 20 19:05:14 compute-0 ceph-mon[75120]: pgmap v137: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 20 19:05:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} : dispatch
Jan 20 19:05:14 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:14 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:14 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:14 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:14 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 20 19:05:14 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 20 19:05:15 compute-0 sudo[98145]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukswulrfapxofuvohhwscnbykuapttyq ; /usr/bin/python3'
Jan 20 19:05:15 compute-0 sudo[98145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:05:15 compute-0 python3[98147]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:05:15 compute-0 podman[98148]: 2026-01-20 19:05:15.233803042 +0000 UTC m=+0.047108698 container create ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:05:15 compute-0 systemd[1]: Started libpod-conmon-ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b.scope.
Jan 20 19:05:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:05:15 compute-0 podman[98148]: 2026-01-20 19:05:15.215462048 +0000 UTC m=+0.028767714 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:05:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2bf5a7e332ab3044b0f0af63c735ca17a715f88a699362dbc44f490b63c6d0/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cc2bf5a7e332ab3044b0f0af63c735ca17a715f88a699362dbc44f490b63c6d0/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:15 compute-0 podman[98148]: 2026-01-20 19:05:15.327898013 +0000 UTC m=+0.141203739 container init ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 19:05:15 compute-0 podman[98148]: 2026-01-20 19:05:15.33495205 +0000 UTC m=+0.148257696 container start ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 19:05:15 compute-0 podman[98148]: 2026-01-20 19:05:15.339099518 +0000 UTC m=+0.152405164 container attach ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:05:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Jan 20 19:05:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Jan 20 19:05:15 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:15 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.003064156s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577781677s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:15 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002538681s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577713013s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001266479s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577674866s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:15 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:15 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000796318s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577674866s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:15 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:15 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:15 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 19:05:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Jan 20 19:05:15 compute-0 ceph-mon[75120]: osdmap e66: 3 total, 3 up, 3 in
Jan 20 19:05:15 compute-0 ceph-mon[75120]: 4.17 scrub starts
Jan 20 19:05:15 compute-0 ceph-mon[75120]: 4.17 scrub ok
Jan 20 19:05:15 compute-0 ceph-mon[75120]: osdmap e67: 3 total, 3 up, 3 in
Jan 20 19:05:15 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:15 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v140: 305 pgs: 4 unknown, 4 remapped+peering, 297 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:16 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 20 19:05:16 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 20 19:05:16 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 20 19:05:16 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 20 19:05:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Jan 20 19:05:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Jan 20 19:05:16 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Jan 20 19:05:16 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:16 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:16 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:16 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:16 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:16 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:16 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:16 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:16 compute-0 ceph-mon[75120]: pgmap v140: 305 pgs: 4 unknown, 4 remapped+peering, 297 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:16 compute-0 ceph-mon[75120]: osdmap e68: 3 total, 3 up, 3 in
Jan 20 19:05:16 compute-0 vigorous_ellis[98163]: could not fetch user info: no user info saved
Jan 20 19:05:16 compute-0 systemd[1]: libpod-ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b.scope: Deactivated successfully.
Jan 20 19:05:16 compute-0 podman[98148]: 2026-01-20 19:05:16.518722133 +0000 UTC m=+1.332027789 container died ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:05:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-cc2bf5a7e332ab3044b0f0af63c735ca17a715f88a699362dbc44f490b63c6d0-merged.mount: Deactivated successfully.
Jan 20 19:05:16 compute-0 podman[98148]: 2026-01-20 19:05:16.5620667 +0000 UTC m=+1.375372356 container remove ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b (image=quay.io/ceph/ceph:v20, name=vigorous_ellis, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 19:05:16 compute-0 systemd[1]: libpod-conmon-ea2c4408572ca5c66c8696c7cf6171bfdae0620f040b4b0fcd35b70bec0cf41b.scope: Deactivated successfully.
Jan 20 19:05:16 compute-0 sudo[98145]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:16 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 20 19:05:16 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 20 19:05:16 compute-0 sudo[98285]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udfqqhmmscbntafibybotaidhmczjnve ; /usr/bin/python3'
Jan 20 19:05:16 compute-0 sudo[98285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:05:16 compute-0 python3[98287]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v20 --fsid 90fff835-31df-513f-a409-b6642f04e6ac -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:05:16 compute-0 podman[98288]: 2026-01-20 19:05:16.89783513 +0000 UTC m=+0.043159684 container create 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:05:16 compute-0 systemd[1]: Started libpod-conmon-33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248.scope.
Jan 20 19:05:16 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e87b5462c7d57772b472a5a9ada2d15a99bb1d6e0e9ddc7d41181697a808612/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e87b5462c7d57772b472a5a9ada2d15a99bb1d6e0e9ddc7d41181697a808612/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:16 compute-0 podman[98288]: 2026-01-20 19:05:16.879605348 +0000 UTC m=+0.024929922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph:v20
Jan 20 19:05:16 compute-0 podman[98288]: 2026-01-20 19:05:16.977196622 +0000 UTC m=+0.122521196 container init 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:05:16 compute-0 podman[98288]: 2026-01-20 19:05:16.983131243 +0000 UTC m=+0.128455797 container start 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True)
Jan 20 19:05:16 compute-0 podman[98288]: 2026-01-20 19:05:16.986893782 +0000 UTC m=+0.132218356 container attach 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]: {
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "user_id": "openstack",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "display_name": "openstack",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "email": "",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "suspended": 0,
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "max_buckets": 1000,
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "subusers": [],
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "keys": [
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         {
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:             "user": "openstack",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:             "access_key": "O6AWP42HJEVFMD2DU0GN",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:             "secret_key": "C5DBN8T35EW8FmXv62zVf5jg7zJ1IL2pEqBHcnxE",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:             "active": true,
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:             "create_date": "2026-01-20T19:05:17.194920Z"
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         }
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     ],
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "swift_keys": [],
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "caps": [],
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "op_mask": "read, write, delete",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "default_placement": "",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "default_storage_class": "",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "placement_tags": [],
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "bucket_quota": {
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         "enabled": false,
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         "check_on_raw": false,
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         "max_size": -1,
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         "max_size_kb": 0,
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         "max_objects": -1
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     },
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "user_quota": {
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         "enabled": false,
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         "check_on_raw": false,
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         "max_size": -1,
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         "max_size_kb": 0,
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:         "max_objects": -1
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     },
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "temp_url_keys": [],
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "type": "rgw",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "mfa_ids": [],
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "account_id": "",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "path": "/",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "create_date": "2026-01-20T19:05:17.194402Z",
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "tags": [],
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]:     "group_ids": []
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]: }
Jan 20 19:05:17 compute-0 vigilant_shamir[98303]: 
Jan 20 19:05:17 compute-0 systemd[1]: libpod-33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248.scope: Deactivated successfully.
Jan 20 19:05:17 compute-0 podman[98288]: 2026-01-20 19:05:17.231180613 +0000 UTC m=+0.376505177 container died 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:05:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e87b5462c7d57772b472a5a9ada2d15a99bb1d6e0e9ddc7d41181697a808612-merged.mount: Deactivated successfully.
Jan 20 19:05:17 compute-0 podman[98288]: 2026-01-20 19:05:17.273828584 +0000 UTC m=+0.419153148 container remove 33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248 (image=quay.io/ceph/ceph:v20, name=vigilant_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 19:05:17 compute-0 systemd[1]: libpod-conmon-33e2ac4fb0888234c426c17e540566e832eff7dc8c6688afaef1746ff871c248.scope: Deactivated successfully.
Jan 20 19:05:17 compute-0 sudo[98285]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Jan 20 19:05:17 compute-0 ceph-mon[75120]: 11.13 scrub starts
Jan 20 19:05:17 compute-0 ceph-mon[75120]: 11.13 scrub ok
Jan 20 19:05:17 compute-0 ceph-mon[75120]: 10.18 scrub starts
Jan 20 19:05:17 compute-0 ceph-mon[75120]: 10.18 scrub ok
Jan 20 19:05:17 compute-0 ceph-mon[75120]: 4.19 scrub starts
Jan 20 19:05:17 compute-0 ceph-mon[75120]: 4.19 scrub ok
Jan 20 19:05:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Jan 20 19:05:17 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Jan 20 19:05:17 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69 pruub=14.956954956s) [2] async=[2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 active pruub 120.844100952s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:17 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69 pruub=14.956759453s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844100952s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:17 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.957309723s) [2] async=[2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 active pruub 120.844993591s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:17 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956269264s) [2] async=[2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 active pruub 120.844009399s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:17 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.957247734s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 120.844993591s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:17 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956115723s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844009399s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:17 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956018448s) [2] async=[2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 active pruub 120.844070435s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:17 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.955755234s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY pruub 120.844070435s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:17 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=68'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:17 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:17 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=68'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:17 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:17 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:17 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 pct=0'0 crt=68'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:17 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:17 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v143: 305 pgs: 4 unknown, 4 remapped+peering, 297 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 20 19:05:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 20 19:05:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Jan 20 19:05:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Jan 20 19:05:18 compute-0 ceph-mon[75120]: osdmap e69: 3 total, 3 up, 3 in
Jan 20 19:05:18 compute-0 ceph-mon[75120]: pgmap v143: 305 pgs: 4 unknown, 4 remapped+peering, 297 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:18 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Jan 20 19:05:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:18 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 20 19:05:18 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 20 19:05:19 compute-0 sshd-session[98401]: Connection closed by 147.185.132.67 port 50813
Jan 20 19:05:19 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 20 19:05:19 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 20 19:05:19 compute-0 ceph-mon[75120]: 2.10 scrub starts
Jan 20 19:05:19 compute-0 ceph-mon[75120]: 2.10 scrub ok
Jan 20 19:05:19 compute-0 ceph-mon[75120]: osdmap e70: 3 total, 3 up, 3 in
Jan 20 19:05:19 compute-0 ceph-mon[75120]: 5.14 scrub starts
Jan 20 19:05:19 compute-0 ceph-mon[75120]: 5.14 scrub ok
Jan 20 19:05:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v145: 305 pgs: 4 unknown, 4 remapped+peering, 297 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 31 op/s
Jan 20 19:05:20 compute-0 ceph-mon[75120]: 5.17 scrub starts
Jan 20 19:05:20 compute-0 ceph-mon[75120]: 5.17 scrub ok
Jan 20 19:05:20 compute-0 ceph-mon[75120]: pgmap v145: 305 pgs: 4 unknown, 4 remapped+peering, 297 active+clean; 460 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 31 op/s
Jan 20 19:05:21 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 20 19:05:21 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 20 19:05:21 compute-0 ceph-mon[75120]: 5.8 scrub starts
Jan 20 19:05:21 compute-0 ceph-mon[75120]: 5.8 scrub ok
Jan 20 19:05:21 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 20 19:05:21 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 20 19:05:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v146: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 76 op/s; 526 B/s, 11 objects/s recovering
Jan 20 19:05:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 20 19:05:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 20 19:05:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Jan 20 19:05:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 20 19:05:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Jan 20 19:05:22 compute-0 ceph-mon[75120]: 2.16 scrub starts
Jan 20 19:05:22 compute-0 ceph-mon[75120]: pgmap v146: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 76 op/s; 526 B/s, 11 objects/s recovering
Jan 20 19:05:22 compute-0 ceph-mon[75120]: 2.16 scrub ok
Jan 20 19:05:22 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 20 19:05:22 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} : dispatch
Jan 20 19:05:22 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 19:05:22 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 19:05:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Jan 20 19:05:22 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Jan 20 19:05:22 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71 pruub=15.901213646s) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 active pruub 126.894775391s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:22 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71 pruub=15.901094437s) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.894775391s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:22 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:22 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.466033936s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 active pruub 117.657966614s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:22 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:22 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465965271s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 active pruub 117.658271790s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:22 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:22 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:22 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 20 19:05:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 20 19:05:23 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 20 19:05:23 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 20 19:05:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Jan 20 19:05:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Jan 20 19:05:23 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Jan 20 19:05:23 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:23 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:23 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:23 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:23 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:23 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 19:05:23 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Jan 20 19:05:23 compute-0 ceph-mon[75120]: osdmap e71: 3 total, 3 up, 3 in
Jan 20 19:05:23 compute-0 ceph-mon[75120]: 2.e scrub starts
Jan 20 19:05:23 compute-0 ceph-mon[75120]: 2.e scrub ok
Jan 20 19:05:23 compute-0 ceph-mon[75120]: osdmap e72: 3 total, 3 up, 3 in
Jan 20 19:05:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 76 op/s; 526 B/s, 11 objects/s recovering
Jan 20 19:05:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 20 19:05:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 20 19:05:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Jan 20 19:05:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 20 19:05:24 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Jan 20 19:05:24 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 19:05:24 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 19:05:24 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Jan 20 19:05:24 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Jan 20 19:05:24 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190241814s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 active pruub 116.897994995s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:24 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:24 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:24 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:24 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:24 compute-0 ceph-mon[75120]: 7.1e scrub starts
Jan 20 19:05:24 compute-0 ceph-mon[75120]: 7.1e scrub ok
Jan 20 19:05:24 compute-0 ceph-mon[75120]: pgmap v149: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 37 KiB/s rd, 1.7 KiB/s wr, 76 op/s; 526 B/s, 11 objects/s recovering
Jan 20 19:05:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 20 19:05:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} : dispatch
Jan 20 19:05:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 19:05:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Jan 20 19:05:24 compute-0 ceph-mon[75120]: osdmap e73: 3 total, 3 up, 3 in
Jan 20 19:05:24 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 20 19:05:24 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 20 19:05:25 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 20 19:05:25 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 20 19:05:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Jan 20 19:05:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Jan 20 19:05:25 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Jan 20 19:05:25 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997115135s) [2] async=[2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 active pruub 124.712516785s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:25 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:25 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993810654s) [2] async=[2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 active pruub 124.709548950s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:25 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:25 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=73/74 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:25 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:25 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:25 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:25 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:25 compute-0 ceph-mon[75120]: 2.8 scrub starts
Jan 20 19:05:25 compute-0 ceph-mon[75120]: 2.8 scrub ok
Jan 20 19:05:25 compute-0 ceph-mon[75120]: osdmap e74: 3 total, 3 up, 3 in
Jan 20 19:05:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v152: 305 pgs: 1 peering, 2 remapped+peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:25 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 20 19:05:25 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 20 19:05:26 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 20 19:05:26 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 20 19:05:26 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Jan 20 19:05:26 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Jan 20 19:05:26 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Jan 20 19:05:26 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:26 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:26 compute-0 ceph-mon[75120]: 10.5 scrub starts
Jan 20 19:05:26 compute-0 ceph-mon[75120]: 10.5 scrub ok
Jan 20 19:05:26 compute-0 ceph-mon[75120]: pgmap v152: 305 pgs: 1 peering, 2 remapped+peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:26 compute-0 ceph-mon[75120]: 5.3 scrub starts
Jan 20 19:05:26 compute-0 ceph-mon[75120]: 5.3 scrub ok
Jan 20 19:05:26 compute-0 ceph-mon[75120]: 5.a scrub starts
Jan 20 19:05:26 compute-0 ceph-mon[75120]: osdmap e75: 3 total, 3 up, 3 in
Jan 20 19:05:27 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 20 19:05:27 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 20 19:05:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v154: 305 pgs: 1 peering, 2 remapped+peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:27 compute-0 ceph-mon[75120]: 5.a scrub ok
Jan 20 19:05:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 20 19:05:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 20 19:05:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 20 19:05:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 20 19:05:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:28 compute-0 ceph-mon[75120]: 2.c scrub starts
Jan 20 19:05:28 compute-0 ceph-mon[75120]: 2.c scrub ok
Jan 20 19:05:28 compute-0 ceph-mon[75120]: pgmap v154: 305 pgs: 1 peering, 2 remapped+peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 20 19:05:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 20 19:05:29 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 20 19:05:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v155: 305 pgs: 1 peering, 2 remapped+peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:29 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 20 19:05:29 compute-0 ceph-mon[75120]: 7.1d scrub starts
Jan 20 19:05:29 compute-0 ceph-mon[75120]: 7.1d scrub ok
Jan 20 19:05:29 compute-0 ceph-mon[75120]: 10.1e scrub starts
Jan 20 19:05:29 compute-0 ceph-mon[75120]: 10.1e scrub ok
Jan 20 19:05:30 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 20 19:05:30 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 20 19:05:30 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 20 19:05:30 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 20 19:05:30 compute-0 ceph-mon[75120]: 5.15 scrub starts
Jan 20 19:05:30 compute-0 ceph-mon[75120]: 5.15 scrub ok
Jan 20 19:05:30 compute-0 ceph-mon[75120]: 5.b scrub starts
Jan 20 19:05:30 compute-0 ceph-mon[75120]: pgmap v155: 305 pgs: 1 peering, 2 remapped+peering, 302 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:30 compute-0 ceph-mon[75120]: 5.b scrub ok
Jan 20 19:05:31 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 20 19:05:31 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 20 19:05:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:05:31
Jan 20 19:05:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:05:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Some PGs (0.009836) are inactive; try again later
Jan 20 19:05:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v156: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 1 objects/s recovering
Jan 20 19:05:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 20 19:05:31 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 20 19:05:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Jan 20 19:05:31 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 20 19:05:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Jan 20 19:05:31 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 19:05:31 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 19:05:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Jan 20 19:05:31 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Jan 20 19:05:31 compute-0 ceph-mon[75120]: 5.2 scrub starts
Jan 20 19:05:31 compute-0 ceph-mon[75120]: 5.2 scrub ok
Jan 20 19:05:31 compute-0 ceph-mon[75120]: 10.3 scrub starts
Jan 20 19:05:31 compute-0 ceph-mon[75120]: 10.3 scrub ok
Jan 20 19:05:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 20 19:05:31 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} : dispatch
Jan 20 19:05:32 compute-0 sshd-session[98402]: Accepted publickey for zuul from 192.168.122.30 port 54544 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:05:32 compute-0 systemd-logind[797]: New session 34 of user zuul.
Jan 20 19:05:32 compute-0 systemd[1]: Started Session 34 of User zuul.
Jan 20 19:05:32 compute-0 sshd-session[98402]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:05:32 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.373511314s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 active pruub 127.006271362s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:32 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:32 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:32 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 20 19:05:32 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 20 19:05:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Jan 20 19:05:32 compute-0 ceph-mon[75120]: 2.0 scrub starts
Jan 20 19:05:32 compute-0 ceph-mon[75120]: 2.0 scrub ok
Jan 20 19:05:32 compute-0 ceph-mon[75120]: pgmap v156: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 1 objects/s recovering
Jan 20 19:05:32 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 19:05:32 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Jan 20 19:05:32 compute-0 ceph-mon[75120]: osdmap e76: 3 total, 3 up, 3 in
Jan 20 19:05:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Jan 20 19:05:32 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Jan 20 19:05:32 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=76/77 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:33 compute-0 python3.9[98555]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:05:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e77 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v159: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 1 objects/s recovering
Jan 20 19:05:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 20 19:05:33 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 20 19:05:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Jan 20 19:05:33 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 20 19:05:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Jan 20 19:05:33 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 19:05:33 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 19:05:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Jan 20 19:05:33 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Jan 20 19:05:33 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.928627014s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 active pruub 132.051498413s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:33 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.928561211s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 unknown NOTIFY pruub 132.051498413s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:33 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:33 compute-0 ceph-mon[75120]: 10.17 scrub starts
Jan 20 19:05:33 compute-0 ceph-mon[75120]: 10.17 scrub ok
Jan 20 19:05:33 compute-0 ceph-mon[75120]: osdmap e77: 3 total, 3 up, 3 in
Jan 20 19:05:33 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 20 19:05:33 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} : dispatch
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:05:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:05:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Jan 20 19:05:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Jan 20 19:05:34 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Jan 20 19:05:34 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:34 compute-0 ceph-mon[75120]: pgmap v159: 305 pgs: 305 active+clean; 461 KiB data, 99 MiB used, 60 GiB / 60 GiB avail; 74 B/s, 1 objects/s recovering
Jan 20 19:05:34 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 19:05:34 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Jan 20 19:05:34 compute-0 ceph-mon[75120]: osdmap e78: 3 total, 3 up, 3 in
Jan 20 19:05:35 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 20 19:05:35 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 20 19:05:35 compute-0 sudo[98698]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:05:35 compute-0 sudo[98698]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:35 compute-0 sudo[98698]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:35 compute-0 sudo[98723]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:05:35 compute-0 sudo[98723]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:35 compute-0 sudo[98828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydmxixijzyjymequhdgzqmvlyortyskt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935935.0369754-27-195903437124054/AnsiballZ_command.py'
Jan 20 19:05:35 compute-0 sudo[98828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:05:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v162: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:35 compute-0 python3.9[98836]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:05:35 compute-0 ceph-mon[75120]: osdmap e79: 3 total, 3 up, 3 in
Jan 20 19:05:35 compute-0 sudo[98723]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:05:35 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:05:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:05:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:05:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:05:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:05:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:05:35 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:05:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:05:35 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:05:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:05:35 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:05:35 compute-0 sudo[98862]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:05:35 compute-0 sudo[98862]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:35 compute-0 sudo[98862]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:35 compute-0 sudo[98888]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:05:35 compute-0 sudo[98888]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 20 19:05:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 20 19:05:36 compute-0 podman[98928]: 2026-01-20 19:05:36.224743131 +0000 UTC m=+0.047748958 container create af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:05:36 compute-0 systemd[1]: Started libpod-conmon-af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f.scope.
Jan 20 19:05:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:05:36 compute-0 podman[98928]: 2026-01-20 19:05:36.29751185 +0000 UTC m=+0.120517697 container init af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:05:36 compute-0 podman[98928]: 2026-01-20 19:05:36.203992552 +0000 UTC m=+0.026998399 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:05:36 compute-0 podman[98928]: 2026-01-20 19:05:36.30542938 +0000 UTC m=+0.128435207 container start af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:05:36 compute-0 podman[98928]: 2026-01-20 19:05:36.308917554 +0000 UTC m=+0.131923411 container attach af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:05:36 compute-0 competent_moser[98945]: 167 167
Jan 20 19:05:36 compute-0 systemd[1]: libpod-af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f.scope: Deactivated successfully.
Jan 20 19:05:36 compute-0 podman[98928]: 2026-01-20 19:05:36.314391866 +0000 UTC m=+0.137397703 container died af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:05:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-916886354eb1f52a99886afc516d1d208c3a98b294b1cda7361d954a3b199812-merged.mount: Deactivated successfully.
Jan 20 19:05:36 compute-0 podman[98928]: 2026-01-20 19:05:36.354188492 +0000 UTC m=+0.177194319 container remove af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_moser, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:05:36 compute-0 systemd[1]: libpod-conmon-af78a041894e3afb00a7338dfe60dfd75055801e56ce6ad991fd1e2e9046852f.scope: Deactivated successfully.
Jan 20 19:05:36 compute-0 podman[98968]: 2026-01-20 19:05:36.502435045 +0000 UTC m=+0.040924814 container create b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:05:36 compute-0 systemd[1]: Started libpod-conmon-b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200.scope.
Jan 20 19:05:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:36 compute-0 podman[98968]: 2026-01-20 19:05:36.486062261 +0000 UTC m=+0.024552010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:36 compute-0 podman[98968]: 2026-01-20 19:05:36.592387047 +0000 UTC m=+0.130876826 container init b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 19:05:36 compute-0 podman[98968]: 2026-01-20 19:05:36.600622815 +0000 UTC m=+0.139112564 container start b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 19:05:36 compute-0 podman[98968]: 2026-01-20 19:05:36.604628111 +0000 UTC m=+0.143118150 container attach b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:05:36 compute-0 ceph-mon[75120]: 8.13 scrub starts
Jan 20 19:05:36 compute-0 ceph-mon[75120]: 8.13 scrub ok
Jan 20 19:05:36 compute-0 ceph-mon[75120]: pgmap v162: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:05:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:05:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:05:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:05:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:05:36 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:05:37 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 20 19:05:37 compute-0 zen_heisenberg[98985]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:05:37 compute-0 zen_heisenberg[98985]: --> All data devices are unavailable
Jan 20 19:05:37 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 20 19:05:37 compute-0 systemd[1]: libpod-b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200.scope: Deactivated successfully.
Jan 20 19:05:37 compute-0 podman[98968]: 2026-01-20 19:05:37.131724041 +0000 UTC m=+0.670213830 container died b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:05:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-914bc53d29c9881c233e79c0609f96bd32d8fdd98003af673c5e07a7c06a6f7c-merged.mount: Deactivated successfully.
Jan 20 19:05:37 compute-0 podman[98968]: 2026-01-20 19:05:37.191042326 +0000 UTC m=+0.729532085 container remove b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zen_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:05:37 compute-0 systemd[1]: libpod-conmon-b13c6ba95e2ab42ab94fc204257409ad4c5e4ba52b23b775381600c95afe8200.scope: Deactivated successfully.
Jan 20 19:05:37 compute-0 sudo[98888]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:37 compute-0 sudo[99016]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:05:37 compute-0 sudo[99016]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:37 compute-0 sudo[99016]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:37 compute-0 sudo[99041]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:05:37 compute-0 sudo[99041]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v163: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:37 compute-0 podman[99080]: 2026-01-20 19:05:37.680939751 +0000 UTC m=+0.071818917 container create ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 19:05:37 compute-0 ceph-mon[75120]: 7.7 scrub starts
Jan 20 19:05:37 compute-0 ceph-mon[75120]: 7.7 scrub ok
Jan 20 19:05:37 compute-0 ceph-mon[75120]: 8.8 scrub starts
Jan 20 19:05:37 compute-0 ceph-mon[75120]: 8.8 scrub ok
Jan 20 19:05:37 compute-0 systemd[1]: Started libpod-conmon-ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098.scope.
Jan 20 19:05:37 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:05:37 compute-0 podman[99080]: 2026-01-20 19:05:37.66134566 +0000 UTC m=+0.052224856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:05:37 compute-0 podman[99080]: 2026-01-20 19:05:37.758427893 +0000 UTC m=+0.149307089 container init ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:05:37 compute-0 podman[99080]: 2026-01-20 19:05:37.767116031 +0000 UTC m=+0.157995207 container start ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:05:37 compute-0 admiring_gould[99096]: 167 167
Jan 20 19:05:37 compute-0 systemd[1]: libpod-ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098.scope: Deactivated successfully.
Jan 20 19:05:37 compute-0 podman[99080]: 2026-01-20 19:05:37.771420225 +0000 UTC m=+0.162299421 container attach ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 19:05:37 compute-0 podman[99080]: 2026-01-20 19:05:37.771971029 +0000 UTC m=+0.162850215 container died ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:05:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-9888e5a3688b5c0ea8ee49ba80965201f1d51b00bb1a3db31c39fea0ab471b93-merged.mount: Deactivated successfully.
Jan 20 19:05:37 compute-0 podman[99080]: 2026-01-20 19:05:37.855221289 +0000 UTC m=+0.246100465 container remove ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_gould, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 19:05:37 compute-0 systemd[1]: libpod-conmon-ddac3c96abd24fdbf4fa4de77a7c1a77faf7b0ef8583b1e9488f7f85814ad098.scope: Deactivated successfully.
Jan 20 19:05:38 compute-0 podman[99120]: 2026-01-20 19:05:38.031951187 +0000 UTC m=+0.053144748 container create ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:05:38 compute-0 systemd[1]: Started libpod-conmon-ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646.scope.
Jan 20 19:05:38 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 20 19:05:38 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 20 19:05:38 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3283580cdbe80f026a6b82892c45d27efee982baae9666590ce500e4d7b1f4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3283580cdbe80f026a6b82892c45d27efee982baae9666590ce500e4d7b1f4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3283580cdbe80f026a6b82892c45d27efee982baae9666590ce500e4d7b1f4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3283580cdbe80f026a6b82892c45d27efee982baae9666590ce500e4d7b1f4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:38 compute-0 podman[99120]: 2026-01-20 19:05:38.009618631 +0000 UTC m=+0.030812202 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:05:38 compute-0 podman[99120]: 2026-01-20 19:05:38.109975752 +0000 UTC m=+0.131169303 container init ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 19:05:38 compute-0 podman[99120]: 2026-01-20 19:05:38.116163261 +0000 UTC m=+0.137356812 container start ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 19:05:38 compute-0 podman[99120]: 2026-01-20 19:05:38.120111765 +0000 UTC m=+0.141305346 container attach ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 19:05:38 compute-0 funny_joliot[99137]: {
Jan 20 19:05:38 compute-0 funny_joliot[99137]:     "0": [
Jan 20 19:05:38 compute-0 funny_joliot[99137]:         {
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "devices": [
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "/dev/loop3"
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             ],
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_name": "ceph_lv0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_size": "21470642176",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "name": "ceph_lv0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "tags": {
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.cluster_name": "ceph",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.crush_device_class": "",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.encrypted": "0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.objectstore": "bluestore",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.osd_id": "0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.type": "block",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.vdo": "0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.with_tpm": "0"
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             },
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "type": "block",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "vg_name": "ceph_vg0"
Jan 20 19:05:38 compute-0 funny_joliot[99137]:         }
Jan 20 19:05:38 compute-0 funny_joliot[99137]:     ],
Jan 20 19:05:38 compute-0 funny_joliot[99137]:     "1": [
Jan 20 19:05:38 compute-0 funny_joliot[99137]:         {
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "devices": [
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "/dev/loop4"
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             ],
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_name": "ceph_lv1",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_size": "21470642176",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "name": "ceph_lv1",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "tags": {
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.cluster_name": "ceph",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.crush_device_class": "",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.encrypted": "0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.objectstore": "bluestore",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.osd_id": "1",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.type": "block",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.vdo": "0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.with_tpm": "0"
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             },
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "type": "block",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "vg_name": "ceph_vg1"
Jan 20 19:05:38 compute-0 funny_joliot[99137]:         }
Jan 20 19:05:38 compute-0 funny_joliot[99137]:     ],
Jan 20 19:05:38 compute-0 funny_joliot[99137]:     "2": [
Jan 20 19:05:38 compute-0 funny_joliot[99137]:         {
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "devices": [
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "/dev/loop5"
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             ],
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_name": "ceph_lv2",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_size": "21470642176",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "name": "ceph_lv2",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "tags": {
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.cluster_name": "ceph",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.crush_device_class": "",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.encrypted": "0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.objectstore": "bluestore",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.osd_id": "2",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.type": "block",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.vdo": "0",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:                 "ceph.with_tpm": "0"
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             },
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "type": "block",
Jan 20 19:05:38 compute-0 funny_joliot[99137]:             "vg_name": "ceph_vg2"
Jan 20 19:05:38 compute-0 funny_joliot[99137]:         }
Jan 20 19:05:38 compute-0 funny_joliot[99137]:     ]
Jan 20 19:05:38 compute-0 funny_joliot[99137]: }
Jan 20 19:05:38 compute-0 systemd[1]: libpod-ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646.scope: Deactivated successfully.
Jan 20 19:05:38 compute-0 podman[99120]: 2026-01-20 19:05:38.419061731 +0000 UTC m=+0.440255282 container died ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 19:05:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-f3283580cdbe80f026a6b82892c45d27efee982baae9666590ce500e4d7b1f4a-merged.mount: Deactivated successfully.
Jan 20 19:05:38 compute-0 podman[99120]: 2026-01-20 19:05:38.468635032 +0000 UTC m=+0.489828593 container remove ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 19:05:38 compute-0 systemd[1]: libpod-conmon-ab19303856c219365c056fadd37dcff86807807c522cfaab37d31f0bb9837646.scope: Deactivated successfully.
Jan 20 19:05:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:38 compute-0 sudo[99041]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:38 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 20 19:05:38 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 20 19:05:38 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 20 19:05:38 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 20 19:05:38 compute-0 sudo[99158]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:05:38 compute-0 sudo[99158]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:38 compute-0 sudo[99158]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:38 compute-0 sudo[99183]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:05:38 compute-0 sudo[99183]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:38 compute-0 ceph-mon[75120]: pgmap v163: 305 pgs: 1 peering, 304 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:38 compute-0 ceph-mon[75120]: 8.a scrub starts
Jan 20 19:05:38 compute-0 ceph-mon[75120]: 8.a scrub ok
Jan 20 19:05:38 compute-0 ceph-mon[75120]: 2.2 scrub starts
Jan 20 19:05:38 compute-0 ceph-mon[75120]: 2.2 scrub ok
Jan 20 19:05:38 compute-0 podman[99221]: 2026-01-20 19:05:38.949013858 +0000 UTC m=+0.050006462 container create c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:05:38 compute-0 systemd[1]: Started libpod-conmon-c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc.scope.
Jan 20 19:05:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:05:39 compute-0 podman[99221]: 2026-01-20 19:05:38.928237599 +0000 UTC m=+0.029230233 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:05:39 compute-0 podman[99221]: 2026-01-20 19:05:39.025315442 +0000 UTC m=+0.126308056 container init c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:05:39 compute-0 podman[99221]: 2026-01-20 19:05:39.033298894 +0000 UTC m=+0.134291478 container start c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:05:39 compute-0 ecstatic_moore[99238]: 167 167
Jan 20 19:05:39 compute-0 systemd[1]: libpod-c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc.scope: Deactivated successfully.
Jan 20 19:05:39 compute-0 conmon[99238]: conmon c6ae66f36e581213c8fb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc.scope/container/memory.events
Jan 20 19:05:39 compute-0 podman[99221]: 2026-01-20 19:05:39.037557947 +0000 UTC m=+0.138550561 container attach c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:05:39 compute-0 podman[99221]: 2026-01-20 19:05:39.037870574 +0000 UTC m=+0.138863158 container died c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:05:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-a92ceaebfb7f9798902ee0618b90c37beb002b547bff214e60b19b722159ab45-merged.mount: Deactivated successfully.
Jan 20 19:05:39 compute-0 podman[99221]: 2026-01-20 19:05:39.074804931 +0000 UTC m=+0.175797515 container remove c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=ecstatic_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:05:39 compute-0 systemd[1]: libpod-conmon-c6ae66f36e581213c8fb02de03e7caccfe8422a6426c6dd64e7411f7352ccfdc.scope: Deactivated successfully.
Jan 20 19:05:39 compute-0 podman[99265]: 2026-01-20 19:05:39.21495569 +0000 UTC m=+0.039820828 container create 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 19:05:39 compute-0 systemd[1]: Started libpod-conmon-056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d.scope.
Jan 20 19:05:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911bef188033b983d9c58f0001fb680d0fe56e92e06af6e3f392a45164f1a85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911bef188033b983d9c58f0001fb680d0fe56e92e06af6e3f392a45164f1a85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911bef188033b983d9c58f0001fb680d0fe56e92e06af6e3f392a45164f1a85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b911bef188033b983d9c58f0001fb680d0fe56e92e06af6e3f392a45164f1a85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:05:39 compute-0 podman[99265]: 2026-01-20 19:05:39.290018184 +0000 UTC m=+0.114883342 container init 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:05:39 compute-0 podman[99265]: 2026-01-20 19:05:39.197111951 +0000 UTC m=+0.021977119 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:05:39 compute-0 podman[99265]: 2026-01-20 19:05:39.295304871 +0000 UTC m=+0.120170009 container start 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:05:39 compute-0 podman[99265]: 2026-01-20 19:05:39.298264802 +0000 UTC m=+0.123129970 container attach 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 20 19:05:39 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 20 19:05:39 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 20 19:05:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v164: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 19:05:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 20 19:05:39 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 20 19:05:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Jan 20 19:05:39 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 20 19:05:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Jan 20 19:05:39 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 19:05:39 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 19:05:39 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Jan 20 19:05:39 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Jan 20 19:05:39 compute-0 ceph-mon[75120]: 5.0 scrub starts
Jan 20 19:05:39 compute-0 ceph-mon[75120]: 5.0 scrub ok
Jan 20 19:05:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 20 19:05:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} : dispatch
Jan 20 19:05:39 compute-0 lvm[99367]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:05:39 compute-0 lvm[99368]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:05:39 compute-0 lvm[99368]: VG ceph_vg1 finished
Jan 20 19:05:39 compute-0 lvm[99367]: VG ceph_vg0 finished
Jan 20 19:05:39 compute-0 lvm[99370]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:05:39 compute-0 lvm[99370]: VG ceph_vg2 finished
Jan 20 19:05:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481973648s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 active pruub 133.654129028s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485882759s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 active pruub 133.658874512s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:39 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:39 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:39 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:40 compute-0 friendly_robinson[99283]: {}
Jan 20 19:05:40 compute-0 systemd[1]: libpod-056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d.scope: Deactivated successfully.
Jan 20 19:05:40 compute-0 podman[99265]: 2026-01-20 19:05:40.074290364 +0000 UTC m=+0.899155532 container died 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:05:40 compute-0 systemd[1]: libpod-056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d.scope: Consumed 1.288s CPU time.
Jan 20 19:05:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-b911bef188033b983d9c58f0001fb680d0fe56e92e06af6e3f392a45164f1a85-merged.mount: Deactivated successfully.
Jan 20 19:05:40 compute-0 podman[99265]: 2026-01-20 19:05:40.530909448 +0000 UTC m=+1.355774606 container remove 056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_robinson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:05:40 compute-0 systemd[1]: libpod-conmon-056b47f6b9fc6440a824d7cda96321501fd1bd92819cc8f9b7814b502944597d.scope: Deactivated successfully.
Jan 20 19:05:40 compute-0 sudo[99183]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:05:40 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:05:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:05:40 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:05:40 compute-0 sudo[99385]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:05:40 compute-0 sudo[99385]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:05:40 compute-0 sudo[99385]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Jan 20 19:05:40 compute-0 ceph-mon[75120]: 10.0 scrub starts
Jan 20 19:05:40 compute-0 ceph-mon[75120]: 10.0 scrub ok
Jan 20 19:05:40 compute-0 ceph-mon[75120]: pgmap v164: 305 pgs: 305 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 19:05:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 19:05:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Jan 20 19:05:40 compute-0 ceph-mon[75120]: osdmap e80: 3 total, 3 up, 3 in
Jan 20 19:05:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:05:40 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:05:40 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Jan 20 19:05:40 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Jan 20 19:05:40 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:40 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:40 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:40 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:40 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:41 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 20 19:05:41 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 20 19:05:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v167: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 19:05:41 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 20 19:05:41 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 20 19:05:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Jan 20 19:05:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Jan 20 19:05:41 compute-0 ceph-mon[75120]: osdmap e81: 3 total, 3 up, 3 in
Jan 20 19:05:41 compute-0 ceph-mon[75120]: 11.0 scrub starts
Jan 20 19:05:41 compute-0 ceph-mon[75120]: 11.0 scrub ok
Jan 20 19:05:41 compute-0 ceph-mon[75120]: 5.5 scrub starts
Jan 20 19:05:41 compute-0 ceph-mon[75120]: 5.5 scrub ok
Jan 20 19:05:41 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Jan 20 19:05:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 20 19:05:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 20 19:05:42 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:42 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:42 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Jan 20 19:05:42 compute-0 ceph-mon[75120]: pgmap v167: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Jan 20 19:05:42 compute-0 ceph-mon[75120]: osdmap e82: 3 total, 3 up, 3 in
Jan 20 19:05:42 compute-0 ceph-mon[75120]: 8.3 scrub starts
Jan 20 19:05:42 compute-0 ceph-mon[75120]: 8.3 scrub ok
Jan 20 19:05:42 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Jan 20 19:05:42 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Jan 20 19:05:42 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.692206383s) [2] async=[2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 active pruub 142.686187744s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:42 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:42 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693123817s) [2] async=[2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 active pruub 142.688095093s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:42 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:42 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:42 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:42 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:42 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:43 compute-0 sudo[98828]: pam_unix(sudo:session): session closed for user root
Jan 20 19:05:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e83 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:43 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 20 19:05:43 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 20 19:05:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v170: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:43 compute-0 sshd-session[98405]: Connection closed by 192.168.122.30 port 54544
Jan 20 19:05:43 compute-0 sshd-session[98402]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:05:43 compute-0 systemd[1]: session-34.scope: Deactivated successfully.
Jan 20 19:05:43 compute-0 systemd[1]: session-34.scope: Consumed 8.513s CPU time.
Jan 20 19:05:43 compute-0 systemd-logind[797]: Session 34 logged out. Waiting for processes to exit.
Jan 20 19:05:43 compute-0 systemd-logind[797]: Removed session 34.
Jan 20 19:05:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Jan 20 19:05:43 compute-0 ceph-mon[75120]: osdmap e83: 3 total, 3 up, 3 in
Jan 20 19:05:43 compute-0 ceph-mon[75120]: 2.f scrub starts
Jan 20 19:05:43 compute-0 ceph-mon[75120]: 2.f scrub ok
Jan 20 19:05:43 compute-0 ceph-mon[75120]: pgmap v170: 305 pgs: 2 unknown, 303 active+clean; 461 KiB data, 100 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Jan 20 19:05:43 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Jan 20 19:05:43 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:43 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.2905063468931614e-06 of space, bias 4.0, pg target 0.0015486076162717936 quantized to 16 (current 16)
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:05:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:05:44 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 20 19:05:44 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 20 19:05:44 compute-0 ceph-mon[75120]: osdmap e84: 3 total, 3 up, 3 in
Jan 20 19:05:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 20 19:05:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 20 19:05:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 639 B/s wr, 21 op/s; 87 B/s, 2 objects/s recovering
Jan 20 19:05:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 20 19:05:45 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 20 19:05:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Jan 20 19:05:45 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 20 19:05:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Jan 20 19:05:45 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 19:05:45 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 19:05:45 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Jan 20 19:05:45 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Jan 20 19:05:45 compute-0 ceph-mon[75120]: 2.1 scrub starts
Jan 20 19:05:45 compute-0 ceph-mon[75120]: 2.1 scrub ok
Jan 20 19:05:45 compute-0 ceph-mon[75120]: 8.1 scrub starts
Jan 20 19:05:45 compute-0 ceph-mon[75120]: 8.1 scrub ok
Jan 20 19:05:45 compute-0 ceph-mon[75120]: pgmap v172: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 8.5 KiB/s rd, 639 B/s wr, 21 op/s; 87 B/s, 2 objects/s recovering
Jan 20 19:05:45 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 20 19:05:45 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} : dispatch
Jan 20 19:05:46 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 20 19:05:46 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 20 19:05:46 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=12.617008209s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 active pruub 147.753036499s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:46 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=12.616838455s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 unknown NOTIFY pruub 147.753036499s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:46 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Jan 20 19:05:46 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 19:05:46 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Jan 20 19:05:46 compute-0 ceph-mon[75120]: osdmap e85: 3 total, 3 up, 3 in
Jan 20 19:05:46 compute-0 ceph-mon[75120]: 5.6 scrub starts
Jan 20 19:05:46 compute-0 ceph-mon[75120]: 5.6 scrub ok
Jan 20 19:05:46 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Jan 20 19:05:46 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Jan 20 19:05:46 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:47 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 20 19:05:47 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 20 19:05:47 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 20 19:05:47 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 20 19:05:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v175: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 643 B/s wr, 21 op/s; 87 B/s, 2 objects/s recovering
Jan 20 19:05:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 20 19:05:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 20 19:05:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Jan 20 19:05:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 20 19:05:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Jan 20 19:05:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 19:05:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 19:05:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Jan 20 19:05:47 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Jan 20 19:05:47 compute-0 ceph-mon[75120]: osdmap e86: 3 total, 3 up, 3 in
Jan 20 19:05:47 compute-0 ceph-mon[75120]: 8.0 scrub starts
Jan 20 19:05:47 compute-0 ceph-mon[75120]: 8.0 scrub ok
Jan 20 19:05:47 compute-0 ceph-mon[75120]: 10.a scrub starts
Jan 20 19:05:47 compute-0 ceph-mon[75120]: 10.a scrub ok
Jan 20 19:05:47 compute-0 ceph-mon[75120]: pgmap v175: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 8.6 KiB/s rd, 643 B/s wr, 21 op/s; 87 B/s, 2 objects/s recovering
Jan 20 19:05:47 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 20 19:05:47 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} : dispatch
Jan 20 19:05:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 19:05:48 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Jan 20 19:05:48 compute-0 ceph-mon[75120]: osdmap e87: 3 total, 3 up, 3 in
Jan 20 19:05:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 355 B/s wr, 16 op/s; 84 B/s, 2 objects/s recovering
Jan 20 19:05:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 20 19:05:49 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 20 19:05:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Jan 20 19:05:49 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 20 19:05:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Jan 20 19:05:49 compute-0 ceph-mon[75120]: pgmap v177: 305 pgs: 305 active+clean; 461 KiB data, 117 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 355 B/s wr, 16 op/s; 84 B/s, 2 objects/s recovering
Jan 20 19:05:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 20 19:05:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} : dispatch
Jan 20 19:05:49 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 19:05:49 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 19:05:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Jan 20 19:05:49 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Jan 20 19:05:50 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 20 19:05:50 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 20 19:05:50 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=9.120479584s) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 active pruub 148.052017212s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:50 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=9.120371819s) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 unknown NOTIFY pruub 148.052017212s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:50 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:50 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Jan 20 19:05:50 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 19:05:50 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Jan 20 19:05:50 compute-0 ceph-mon[75120]: osdmap e88: 3 total, 3 up, 3 in
Jan 20 19:05:50 compute-0 ceph-mon[75120]: 10.c scrub starts
Jan 20 19:05:50 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Jan 20 19:05:50 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Jan 20 19:05:50 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:51 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 20 19:05:51 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 20 19:05:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Jan 20 19:05:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Jan 20 19:05:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 20 19:05:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Jan 20 19:05:52 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 20 19:05:52 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 20 19:05:52 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 20 19:05:52 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 20 19:05:52 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 20 19:05:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Jan 20 19:05:52 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Jan 20 19:05:52 compute-0 ceph-mon[75120]: 10.c scrub ok
Jan 20 19:05:52 compute-0 ceph-mon[75120]: osdmap e89: 3 total, 3 up, 3 in
Jan 20 19:05:52 compute-0 ceph-mon[75120]: 5.e scrub starts
Jan 20 19:05:52 compute-0 ceph-mon[75120]: 5.e scrub ok
Jan 20 19:05:52 compute-0 ceph-mon[75120]: pgmap v180: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 14 B/s, 0 objects/s recovering
Jan 20 19:05:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} : dispatch
Jan 20 19:05:53 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 20 19:05:53 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 20 19:05:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e90 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v182: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 20 19:05:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Jan 20 19:05:53 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 20 19:05:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Jan 20 19:05:53 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 20 19:05:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Jan 20 19:05:53 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Jan 20 19:05:53 compute-0 ceph-mon[75120]: 3.b scrub starts
Jan 20 19:05:53 compute-0 ceph-mon[75120]: 3.b scrub ok
Jan 20 19:05:53 compute-0 ceph-mon[75120]: 5.d scrub starts
Jan 20 19:05:53 compute-0 ceph-mon[75120]: 5.d scrub ok
Jan 20 19:05:53 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Jan 20 19:05:53 compute-0 ceph-mon[75120]: osdmap e90: 3 total, 3 up, 3 in
Jan 20 19:05:53 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} : dispatch
Jan 20 19:05:54 compute-0 ceph-mon[75120]: 11.c scrub starts
Jan 20 19:05:54 compute-0 ceph-mon[75120]: 11.c scrub ok
Jan 20 19:05:54 compute-0 ceph-mon[75120]: pgmap v182: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 12 B/s, 0 objects/s recovering
Jan 20 19:05:54 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Jan 20 19:05:54 compute-0 ceph-mon[75120]: osdmap e91: 3 total, 3 up, 3 in
Jan 20 19:05:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v184: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 108 B/s, 0 objects/s recovering
Jan 20 19:05:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Jan 20 19:05:55 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 20 19:05:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Jan 20 19:05:55 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} : dispatch
Jan 20 19:05:55 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 20 19:05:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Jan 20 19:05:55 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Jan 20 19:05:56 compute-0 ceph-mon[75120]: pgmap v184: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 108 B/s, 0 objects/s recovering
Jan 20 19:05:56 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Jan 20 19:05:56 compute-0 ceph-mon[75120]: osdmap e92: 3 total, 3 up, 3 in
Jan 20 19:05:57 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 20 19:05:57 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 20 19:05:57 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 20 19:05:57 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 20 19:05:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 20 19:05:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Jan 20 19:05:57 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 20 19:05:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Jan 20 19:05:57 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 20 19:05:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Jan 20 19:05:57 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Jan 20 19:05:57 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=10.109041214s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 active pruub 156.283859253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:57 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=10.108978271s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 156.283859253s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:57 compute-0 ceph-mon[75120]: 5.1c scrub starts
Jan 20 19:05:57 compute-0 ceph-mon[75120]: 5.1c scrub ok
Jan 20 19:05:57 compute-0 ceph-mon[75120]: 10.7 scrub starts
Jan 20 19:05:57 compute-0 ceph-mon[75120]: 10.7 scrub ok
Jan 20 19:05:57 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} : dispatch
Jan 20 19:05:57 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:58 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 20 19:05:58 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 20 19:05:58 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 20 19:05:58 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 20 19:05:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e93 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:05:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Jan 20 19:05:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Jan 20 19:05:58 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Jan 20 19:05:58 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:58 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:05:58 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:05:58 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:05:58 compute-0 ceph-mon[75120]: pgmap v186: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail; 102 B/s, 0 objects/s recovering
Jan 20 19:05:58 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Jan 20 19:05:58 compute-0 ceph-mon[75120]: osdmap e93: 3 total, 3 up, 3 in
Jan 20 19:05:58 compute-0 ceph-mon[75120]: 3.4 scrub starts
Jan 20 19:05:58 compute-0 ceph-mon[75120]: 3.4 scrub ok
Jan 20 19:05:58 compute-0 ceph-mon[75120]: 5.1b scrub starts
Jan 20 19:05:58 compute-0 ceph-mon[75120]: osdmap e94: 3 total, 3 up, 3 in
Jan 20 19:05:59 compute-0 sshd-session[99443]: Accepted publickey for zuul from 192.168.122.30 port 39384 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:05:59 compute-0 systemd-logind[797]: New session 35 of user zuul.
Jan 20 19:05:59 compute-0 systemd[1]: Started Session 35 of User zuul.
Jan 20 19:05:59 compute-0 sshd-session[99443]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:05:59 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 20 19:05:59 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 20 19:05:59 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 20 19:05:59 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 20 19:05:59 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 20 19:05:59 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 20 19:05:59 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Jan 20 19:05:59 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Jan 20 19:05:59 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Jan 20 19:05:59 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:05:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:05:59 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Jan 20 19:05:59 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 20 19:05:59 compute-0 ceph-mon[75120]: 5.1b scrub ok
Jan 20 19:05:59 compute-0 ceph-mon[75120]: 7.0 scrub starts
Jan 20 19:05:59 compute-0 ceph-mon[75120]: 7.0 scrub ok
Jan 20 19:05:59 compute-0 ceph-mon[75120]: 7.1a scrub starts
Jan 20 19:05:59 compute-0 ceph-mon[75120]: 7.1a scrub ok
Jan 20 19:05:59 compute-0 ceph-mon[75120]: 2.1c scrub starts
Jan 20 19:05:59 compute-0 ceph-mon[75120]: 2.1c scrub ok
Jan 20 19:05:59 compute-0 ceph-mon[75120]: osdmap e95: 3 total, 3 up, 3 in
Jan 20 19:05:59 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} : dispatch
Jan 20 19:05:59 compute-0 python3.9[99596]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 20 19:06:00 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 20 19:06:00 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 20 19:06:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Jan 20 19:06:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 20 19:06:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Jan 20 19:06:00 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Jan 20 19:06:00 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.003457069s) [2] async=[2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 active pruub 163.980422974s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:00 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.003147125s) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 163.980422974s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:00 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 pct=0'0 crt=68'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:00 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:01 compute-0 python3.9[99770]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:06:01 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Jan 20 19:06:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Jan 20 19:06:01 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Jan 20 19:06:01 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 20 19:06:01 compute-0 sudo[99924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmglpjaxymowlawdtdvogdnptjllwalg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935961.4381423-40-39336701682841/AnsiballZ_command.py'
Jan 20 19:06:01 compute-0 sudo[99924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:06:02 compute-0 ceph-mon[75120]: pgmap v190: 305 pgs: 305 active+clean; 461 KiB data, 118 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:02 compute-0 ceph-mon[75120]: 4.18 scrub starts
Jan 20 19:06:02 compute-0 ceph-mon[75120]: 4.18 scrub ok
Jan 20 19:06:02 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Jan 20 19:06:02 compute-0 ceph-mon[75120]: osdmap e96: 3 total, 3 up, 3 in
Jan 20 19:06:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Jan 20 19:06:02 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Jan 20 19:06:02 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:02 compute-0 python3.9[99926]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:06:02 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 20 19:06:02 compute-0 sudo[99924]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:02 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 20 19:06:03 compute-0 sudo[100077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lddsbuxszeuugvvwnkugbwurqmrzxral ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935962.6830187-52-51059882397475/AnsiballZ_stat.py'
Jan 20 19:06:03 compute-0 sudo[100077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:06:03 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 20 19:06:03 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 20 19:06:03 compute-0 python3.9[100079]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:06:03 compute-0 sudo[100077]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:03 compute-0 ceph-mon[75120]: pgmap v192: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Jan 20 19:06:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} : dispatch
Jan 20 19:06:03 compute-0 ceph-mon[75120]: osdmap e97: 3 total, 3 up, 3 in
Jan 20 19:06:03 compute-0 ceph-mon[75120]: 5.4 scrub starts
Jan 20 19:06:03 compute-0 ceph-mon[75120]: 5.4 scrub ok
Jan 20 19:06:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Jan 20 19:06:03 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 20 19:06:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Jan 20 19:06:03 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Jan 20 19:06:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98 pruub=12.439196587s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 active pruub 164.286026001s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98 pruub=12.438729286s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY pruub 164.286026001s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:03 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:03 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 20 19:06:03 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 20 19:06:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e98 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Jan 20 19:06:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Jan 20 19:06:03 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Jan 20 19:06:03 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:03 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:03 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v196: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Jan 20 19:06:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Jan 20 19:06:03 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 20 19:06:04 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 20 19:06:04 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 20 19:06:04 compute-0 sudo[100231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yavhzndnbsfkamygeeuoqkqvntuopqqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935963.6813662-63-42780504727751/AnsiballZ_file.py'
Jan 20 19:06:04 compute-0 sudo[100231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:06:04 compute-0 python3.9[100233]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:06:04 compute-0 ceph-mon[75120]: 3.0 scrub starts
Jan 20 19:06:04 compute-0 ceph-mon[75120]: 3.0 scrub ok
Jan 20 19:06:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Jan 20 19:06:04 compute-0 ceph-mon[75120]: osdmap e98: 3 total, 3 up, 3 in
Jan 20 19:06:04 compute-0 ceph-mon[75120]: 10.4 scrub starts
Jan 20 19:06:04 compute-0 ceph-mon[75120]: 10.4 scrub ok
Jan 20 19:06:04 compute-0 ceph-mon[75120]: osdmap e99: 3 total, 3 up, 3 in
Jan 20 19:06:04 compute-0 ceph-mon[75120]: pgmap v196: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 135 MiB used, 60 GiB / 60 GiB avail; 65 B/s, 1 objects/s recovering
Jan 20 19:06:04 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} : dispatch
Jan 20 19:06:04 compute-0 sudo[100231]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:04 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 20 19:06:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:04 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 20 19:06:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Jan 20 19:06:04 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 20 19:06:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Jan 20 19:06:04 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Jan 20 19:06:04 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861665726s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 active pruub 157.961791992s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:04 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:04 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:04 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:04 compute-0 sudo[100383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcfvbdakpwwelnbttbzxivurrwwiuswd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935964.5233462-72-202056560872987/AnsiballZ_file.py'
Jan 20 19:06:04 compute-0 sudo[100383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:06:04 compute-0 python3.9[100385]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:06:05 compute-0 sudo[100383]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 20 19:06:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 20 19:06:05 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 20 19:06:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Jan 20 19:06:05 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 20 19:06:05 compute-0 ceph-mon[75120]: 11.a scrub starts
Jan 20 19:06:05 compute-0 ceph-mon[75120]: 11.a scrub ok
Jan 20 19:06:05 compute-0 ceph-mon[75120]: 10.1 scrub starts
Jan 20 19:06:05 compute-0 ceph-mon[75120]: 10.1 scrub ok
Jan 20 19:06:05 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Jan 20 19:06:05 compute-0 ceph-mon[75120]: osdmap e100: 3 total, 3 up, 3 in
Jan 20 19:06:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Jan 20 19:06:05 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Jan 20 19:06:05 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:05 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:05 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101 pruub=14.991423607s) [1] async=[1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 active pruub 168.984329224s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:05 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101 pruub=14.991363525s) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY pruub 168.984329224s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:05 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:05 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:05 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:05 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:05 compute-0 python3.9[100535]: ansible-ansible.builtin.service_facts Invoked
Jan 20 19:06:05 compute-0 network[100552]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 19:06:05 compute-0 network[100553]: 'network-scripts' will be removed from distribution in near future.
Jan 20 19:06:05 compute-0 network[100554]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 19:06:06 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 20 19:06:06 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 20 19:06:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Jan 20 19:06:06 compute-0 ceph-mon[75120]: 3.2 scrub starts
Jan 20 19:06:06 compute-0 ceph-mon[75120]: 3.2 scrub ok
Jan 20 19:06:06 compute-0 ceph-mon[75120]: 5.7 scrub starts
Jan 20 19:06:06 compute-0 ceph-mon[75120]: 5.7 scrub ok
Jan 20 19:06:06 compute-0 ceph-mon[75120]: osdmap e101: 3 total, 3 up, 3 in
Jan 20 19:06:06 compute-0 ceph-mon[75120]: pgmap v199: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Jan 20 19:06:06 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Jan 20 19:06:06 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:06 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:07 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 20 19:06:07 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 20 19:06:07 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Jan 20 19:06:07 compute-0 ceph-mon[75120]: 10.8 scrub starts
Jan 20 19:06:07 compute-0 ceph-mon[75120]: 10.8 scrub ok
Jan 20 19:06:07 compute-0 ceph-mon[75120]: osdmap e102: 3 total, 3 up, 3 in
Jan 20 19:06:07 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Jan 20 19:06:07 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Jan 20 19:06:07 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993875504s) [0] async=[0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 active pruub 160.133941650s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:07 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:07 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:07 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e103 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Jan 20 19:06:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Jan 20 19:06:08 compute-0 ceph-mon[75120]: 2.18 scrub starts
Jan 20 19:06:08 compute-0 ceph-mon[75120]: 2.18 scrub ok
Jan 20 19:06:08 compute-0 ceph-mon[75120]: osdmap e103: 3 total, 3 up, 3 in
Jan 20 19:06:08 compute-0 ceph-mon[75120]: pgmap v202: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:08 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Jan 20 19:06:08 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=103/104 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v204: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:09 compute-0 ceph-mon[75120]: osdmap e104: 3 total, 3 up, 3 in
Jan 20 19:06:10 compute-0 python3.9[100815]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:06:10 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 20 19:06:10 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 20 19:06:10 compute-0 ceph-mon[75120]: pgmap v204: 305 pgs: 1 unknown, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:11 compute-0 python3.9[100965]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:06:11 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 20 19:06:11 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 20 19:06:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 7 op/s; 36 B/s, 1 objects/s recovering
Jan 20 19:06:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Jan 20 19:06:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 20 19:06:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Jan 20 19:06:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 20 19:06:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Jan 20 19:06:11 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Jan 20 19:06:11 compute-0 ceph-mon[75120]: 8.15 scrub starts
Jan 20 19:06:11 compute-0 ceph-mon[75120]: 8.15 scrub ok
Jan 20 19:06:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} : dispatch
Jan 20 19:06:12 compute-0 python3.9[101119]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:06:12 compute-0 ceph-mon[75120]: 2.19 scrub starts
Jan 20 19:06:12 compute-0 ceph-mon[75120]: 2.19 scrub ok
Jan 20 19:06:12 compute-0 ceph-mon[75120]: pgmap v205: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 7 op/s; 36 B/s, 1 objects/s recovering
Jan 20 19:06:12 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Jan 20 19:06:12 compute-0 ceph-mon[75120]: osdmap e105: 3 total, 3 up, 3 in
Jan 20 19:06:12 compute-0 sudo[101275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nswotehsvnwchtyppxuxvcewhsxggkif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935972.526426-120-267374644129493/AnsiballZ_setup.py'
Jan 20 19:06:12 compute-0 sudo[101275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:06:13 compute-0 python3.9[101277]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:06:13 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 20 19:06:13 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 20 19:06:13 compute-0 sudo[101275]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e105 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 340 B/s wr, 7 op/s; 36 B/s, 1 objects/s recovering
Jan 20 19:06:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Jan 20 19:06:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 20 19:06:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Jan 20 19:06:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 20 19:06:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Jan 20 19:06:13 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Jan 20 19:06:13 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} : dispatch
Jan 20 19:06:13 compute-0 sudo[101359]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xspomzahiuoskfpixpfjsibziibwhmgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768935972.526426-120-267374644129493/AnsiballZ_dnf.py'
Jan 20 19:06:13 compute-0 sudo[101359]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:06:13 compute-0 python3.9[101361]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:06:14 compute-0 ceph-mon[75120]: 7.d scrub starts
Jan 20 19:06:14 compute-0 ceph-mon[75120]: 7.d scrub ok
Jan 20 19:06:14 compute-0 ceph-mon[75120]: pgmap v207: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 340 B/s wr, 7 op/s; 36 B/s, 1 objects/s recovering
Jan 20 19:06:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Jan 20 19:06:14 compute-0 ceph-mon[75120]: osdmap e106: 3 total, 3 up, 3 in
Jan 20 19:06:15 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 20 19:06:15 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 20 19:06:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 292 B/s wr, 6 op/s; 31 B/s, 1 objects/s recovering
Jan 20 19:06:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Jan 20 19:06:15 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 20 19:06:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Jan 20 19:06:15 compute-0 ceph-mon[75120]: 8.7 scrub starts
Jan 20 19:06:15 compute-0 ceph-mon[75120]: 8.7 scrub ok
Jan 20 19:06:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} : dispatch
Jan 20 19:06:15 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 20 19:06:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Jan 20 19:06:15 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Jan 20 19:06:15 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107 pruub=8.961336136s) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 active pruub 173.299713135s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:15 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107 pruub=8.961268425s) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 173.299713135s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:15 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:16 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 20 19:06:16 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 20 19:06:16 compute-0 ceph-mon[75120]: pgmap v209: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 292 B/s wr, 6 op/s; 31 B/s, 1 objects/s recovering
Jan 20 19:06:16 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Jan 20 19:06:16 compute-0 ceph-mon[75120]: osdmap e107: 3 total, 3 up, 3 in
Jan 20 19:06:16 compute-0 ceph-mon[75120]: 11.5 scrub starts
Jan 20 19:06:16 compute-0 ceph-mon[75120]: 11.5 scrub ok
Jan 20 19:06:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Jan 20 19:06:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Jan 20 19:06:16 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Jan 20 19:06:16 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:16 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:16 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:16 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v212: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Jan 20 19:06:17 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 20 19:06:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Jan 20 19:06:17 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 20 19:06:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Jan 20 19:06:17 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Jan 20 19:06:17 compute-0 ceph-mon[75120]: osdmap e108: 3 total, 3 up, 3 in
Jan 20 19:06:17 compute-0 ceph-mon[75120]: pgmap v212: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:17 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} : dispatch
Jan 20 19:06:18 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 20 19:06:18 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 20 19:06:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e109 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:18 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Jan 20 19:06:18 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Jan 20 19:06:18 compute-0 ceph-mon[75120]: osdmap e109: 3 total, 3 up, 3 in
Jan 20 19:06:18 compute-0 ceph-mon[75120]: 3.d scrub starts
Jan 20 19:06:18 compute-0 ceph-mon[75120]: 3.d scrub ok
Jan 20 19:06:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Jan 20 19:06:18 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Jan 20 19:06:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:18 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110 pruub=15.737841606s) [2] async=[2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 active pruub 183.126052856s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:18 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110 pruub=15.737683296s) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 183.126052856s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v215: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:19 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 20 19:06:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Jan 20 19:06:19 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 20 19:06:19 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 20 19:06:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Jan 20 19:06:19 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 20 19:06:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Jan 20 19:06:19 compute-0 ceph-mon[75120]: osdmap e110: 3 total, 3 up, 3 in
Jan 20 19:06:19 compute-0 ceph-mon[75120]: pgmap v215: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:19 compute-0 ceph-mon[75120]: 2.b scrub starts
Jan 20 19:06:19 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} : dispatch
Jan 20 19:06:19 compute-0 ceph-mon[75120]: 2.b scrub ok
Jan 20 19:06:19 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Jan 20 19:06:19 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:20 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 20 19:06:20 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 20 19:06:20 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Jan 20 19:06:20 compute-0 ceph-mon[75120]: osdmap e111: 3 total, 3 up, 3 in
Jan 20 19:06:20 compute-0 ceph-mon[75120]: 8.5 scrub starts
Jan 20 19:06:20 compute-0 ceph-mon[75120]: 8.5 scrub ok
Jan 20 19:06:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v217: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 1 objects/s recovering
Jan 20 19:06:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Jan 20 19:06:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 20 19:06:21 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 20 19:06:21 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 20 19:06:22 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 20 19:06:22 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 20 19:06:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Jan 20 19:06:22 compute-0 ceph-mon[75120]: pgmap v217: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 87 B/s, 1 objects/s recovering
Jan 20 19:06:22 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} : dispatch
Jan 20 19:06:22 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 20 19:06:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Jan 20 19:06:22 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Jan 20 19:06:22 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038787842s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 active pruub 169.396194458s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:22 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:22 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Jan 20 19:06:23 compute-0 ceph-mon[75120]: 3.1e scrub starts
Jan 20 19:06:23 compute-0 ceph-mon[75120]: 3.1e scrub ok
Jan 20 19:06:23 compute-0 ceph-mon[75120]: 7.b scrub starts
Jan 20 19:06:23 compute-0 ceph-mon[75120]: 7.b scrub ok
Jan 20 19:06:23 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Jan 20 19:06:23 compute-0 ceph-mon[75120]: osdmap e112: 3 total, 3 up, 3 in
Jan 20 19:06:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Jan 20 19:06:23 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Jan 20 19:06:23 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:23 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:23 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:23 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:23 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 20 19:06:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:23 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 20 19:06:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v220: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 89 B/s, 1 objects/s recovering
Jan 20 19:06:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Jan 20 19:06:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 20 19:06:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 20 19:06:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 20 19:06:24 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Jan 20 19:06:24 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 20 19:06:24 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Jan 20 19:06:24 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Jan 20 19:06:24 compute-0 ceph-mon[75120]: osdmap e113: 3 total, 3 up, 3 in
Jan 20 19:06:24 compute-0 ceph-mon[75120]: 5.1e scrub starts
Jan 20 19:06:24 compute-0 ceph-mon[75120]: 5.1e scrub ok
Jan 20 19:06:24 compute-0 ceph-mon[75120]: pgmap v220: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 89 B/s, 1 objects/s recovering
Jan 20 19:06:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} : dispatch
Jan 20 19:06:24 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:24 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 20 19:06:24 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 20 19:06:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Jan 20 19:06:25 compute-0 ceph-mon[75120]: 11.7 scrub starts
Jan 20 19:06:25 compute-0 ceph-mon[75120]: 11.7 scrub ok
Jan 20 19:06:25 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Jan 20 19:06:25 compute-0 ceph-mon[75120]: osdmap e114: 3 total, 3 up, 3 in
Jan 20 19:06:25 compute-0 ceph-mon[75120]: 4.1a scrub starts
Jan 20 19:06:25 compute-0 ceph-mon[75120]: 4.1a scrub ok
Jan 20 19:06:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Jan 20 19:06:25 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978566170s) [0] async=[0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 active pruub 178.060546875s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:25 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:25 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Jan 20 19:06:25 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:25 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v223: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 2 objects/s recovering
Jan 20 19:06:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Jan 20 19:06:25 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 20 19:06:25 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 20 19:06:25 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 20 19:06:26 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Jan 20 19:06:26 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 20 19:06:26 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Jan 20 19:06:26 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Jan 20 19:06:26 compute-0 ceph-mon[75120]: osdmap e115: 3 total, 3 up, 3 in
Jan 20 19:06:26 compute-0 ceph-mon[75120]: pgmap v223: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 2 objects/s recovering
Jan 20 19:06:26 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} : dispatch
Jan 20 19:06:26 compute-0 ceph-mon[75120]: 4.1b scrub starts
Jan 20 19:06:26 compute-0 ceph-mon[75120]: 4.1b scrub ok
Jan 20 19:06:26 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=115/116 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:26 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705393791s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 active pruub 173.962097168s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:26 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:26 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:27 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 20 19:06:27 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 20 19:06:27 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Jan 20 19:06:27 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Jan 20 19:06:27 compute-0 ceph-mon[75120]: osdmap e116: 3 total, 3 up, 3 in
Jan 20 19:06:27 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Jan 20 19:06:27 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Jan 20 19:06:27 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:27 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:27 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:27 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v226: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 2 objects/s recovering
Jan 20 19:06:27 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Jan 20 19:06:27 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:06:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Jan 20 19:06:28 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:06:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Jan 20 19:06:28 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Jan 20 19:06:28 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912906647s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 active pruub 176.032379150s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:28 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:28 compute-0 ceph-mon[75120]: 7.14 scrub starts
Jan 20 19:06:28 compute-0 ceph-mon[75120]: 7.14 scrub ok
Jan 20 19:06:28 compute-0 ceph-mon[75120]: osdmap e117: 3 total, 3 up, 3 in
Jan 20 19:06:28 compute-0 ceph-mon[75120]: pgmap v226: 305 pgs: 1 active+remapped, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 2 objects/s recovering
Jan 20 19:06:28 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} : dispatch
Jan 20 19:06:28 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:29 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Jan 20 19:06:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Jan 20 19:06:29 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Jan 20 19:06:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Jan 20 19:06:29 compute-0 ceph-mon[75120]: osdmap e118: 3 total, 3 up, 3 in
Jan 20 19:06:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 pct=0'0 crt=68'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:29 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:29 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901124954s) [0] async=[0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 active pruub 183.038589478s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:29 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:29 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:29 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:29 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:29 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 20 19:06:29 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 20 19:06:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 20 19:06:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 20 19:06:30 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 20 19:06:30 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 20 19:06:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Jan 20 19:06:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Jan 20 19:06:30 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Jan 20 19:06:30 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=119/120 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:30 compute-0 ceph-mon[75120]: osdmap e119: 3 total, 3 up, 3 in
Jan 20 19:06:30 compute-0 ceph-mon[75120]: 3.1d scrub starts
Jan 20 19:06:30 compute-0 ceph-mon[75120]: 3.1d scrub ok
Jan 20 19:06:30 compute-0 ceph-mon[75120]: pgmap v229: 305 pgs: 1 unknown, 1 active+remapped, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:30 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:31 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 20 19:06:31 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 20 19:06:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:06:31
Jan 20 19:06:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:06:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Some PGs (0.003279) are unknown; try again later
Jan 20 19:06:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Jan 20 19:06:31 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 20 19:06:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:31 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 20 19:06:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Jan 20 19:06:31 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Jan 20 19:06:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:31 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:06:31 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224273682s) [1] async=[1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 active pruub 184.419616699s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:06:31 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:06:31 compute-0 ceph-mon[75120]: 3.10 scrub starts
Jan 20 19:06:31 compute-0 ceph-mon[75120]: 3.10 scrub ok
Jan 20 19:06:31 compute-0 ceph-mon[75120]: 7.c scrub starts
Jan 20 19:06:31 compute-0 ceph-mon[75120]: 7.c scrub ok
Jan 20 19:06:31 compute-0 ceph-mon[75120]: osdmap e120: 3 total, 3 up, 3 in
Jan 20 19:06:32 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 20 19:06:32 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 20 19:06:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Jan 20 19:06:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Jan 20 19:06:32 compute-0 ceph-mon[75120]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Jan 20 19:06:32 compute-0 ceph-mon[75120]: 7.16 scrub starts
Jan 20 19:06:32 compute-0 ceph-mon[75120]: 7.16 scrub ok
Jan 20 19:06:32 compute-0 ceph-mon[75120]: 3.8 scrub starts
Jan 20 19:06:32 compute-0 ceph-mon[75120]: pgmap v231: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:32 compute-0 ceph-mon[75120]: 3.8 scrub ok
Jan 20 19:06:32 compute-0 ceph-mon[75120]: osdmap e121: 3 total, 3 up, 3 in
Jan 20 19:06:32 compute-0 ceph-mon[75120]: osdmap e122: 3 total, 3 up, 3 in
Jan 20 19:06:32 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:06:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Jan 20 19:06:33 compute-0 ceph-mon[75120]: 8.19 scrub starts
Jan 20 19:06:33 compute-0 ceph-mon[75120]: 8.19 scrub ok
Jan 20 19:06:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 20 19:06:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:06:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:06:34 compute-0 ceph-mon[75120]: pgmap v234: 305 pgs: 1 peering, 1 unknown, 303 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 1 objects/s recovering
Jan 20 19:06:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 7 op/s; 80 B/s, 3 objects/s recovering
Jan 20 19:06:35 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 20 19:06:35 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 20 19:06:35 compute-0 ceph-mon[75120]: 3.13 scrub starts
Jan 20 19:06:35 compute-0 ceph-mon[75120]: 3.13 scrub ok
Jan 20 19:06:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 20 19:06:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 20 19:06:36 compute-0 ceph-mon[75120]: pgmap v235: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 341 B/s wr, 7 op/s; 80 B/s, 3 objects/s recovering
Jan 20 19:06:36 compute-0 ceph-mon[75120]: 11.10 scrub starts
Jan 20 19:06:36 compute-0 ceph-mon[75120]: 11.10 scrub ok
Jan 20 19:06:37 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 20 19:06:37 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 20 19:06:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 291 B/s wr, 6 op/s; 47 B/s, 2 objects/s recovering
Jan 20 19:06:37 compute-0 ceph-mon[75120]: 7.17 scrub starts
Jan 20 19:06:37 compute-0 ceph-mon[75120]: 7.17 scrub ok
Jan 20 19:06:37 compute-0 ceph-mon[75120]: 11.15 scrub starts
Jan 20 19:06:37 compute-0 ceph-mon[75120]: 11.15 scrub ok
Jan 20 19:06:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:38 compute-0 ceph-mon[75120]: pgmap v236: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 291 B/s wr, 6 op/s; 47 B/s, 2 objects/s recovering
Jan 20 19:06:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 41 B/s, 1 objects/s recovering
Jan 20 19:06:39 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 20 19:06:39 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 20 19:06:40 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 20 19:06:40 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 20 19:06:40 compute-0 sudo[101505]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:06:40 compute-0 sudo[101505]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:40 compute-0 sudo[101505]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:40 compute-0 ceph-mon[75120]: pgmap v237: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 255 B/s wr, 5 op/s; 41 B/s, 1 objects/s recovering
Jan 20 19:06:40 compute-0 ceph-mon[75120]: 7.1f scrub starts
Jan 20 19:06:40 compute-0 ceph-mon[75120]: 7.1f scrub ok
Jan 20 19:06:40 compute-0 sudo[101530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:06:40 compute-0 sudo[101530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:41 compute-0 sudo[101530]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:06:41 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:06:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:06:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:06:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:06:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:06:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:06:41 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:06:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:06:41 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:06:41 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:06:41 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:06:41 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 20 19:06:41 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 20 19:06:41 compute-0 sudo[101586]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:06:41 compute-0 sudo[101586]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:41 compute-0 sudo[101586]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:41 compute-0 sudo[101611]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:06:41 compute-0 sudo[101611]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 205 B/s wr, 4 op/s; 33 B/s, 1 objects/s recovering
Jan 20 19:06:41 compute-0 podman[101647]: 2026-01-20 19:06:41.75205893 +0000 UTC m=+0.040173468 container create 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:06:41 compute-0 ceph-mon[75120]: 3.1b scrub starts
Jan 20 19:06:41 compute-0 ceph-mon[75120]: 3.1b scrub ok
Jan 20 19:06:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:06:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:06:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:06:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:06:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:06:41 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:06:41 compute-0 ceph-mon[75120]: 11.3 scrub starts
Jan 20 19:06:41 compute-0 ceph-mon[75120]: 11.3 scrub ok
Jan 20 19:06:41 compute-0 ceph-mon[75120]: pgmap v238: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 205 B/s wr, 4 op/s; 33 B/s, 1 objects/s recovering
Jan 20 19:06:41 compute-0 systemd[1]: Started libpod-conmon-7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36.scope.
Jan 20 19:06:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:41 compute-0 podman[101647]: 2026-01-20 19:06:41.734917547 +0000 UTC m=+0.023032115 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:06:41 compute-0 podman[101647]: 2026-01-20 19:06:41.845820461 +0000 UTC m=+0.133935039 container init 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:06:41 compute-0 podman[101647]: 2026-01-20 19:06:41.854786293 +0000 UTC m=+0.142900841 container start 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:06:41 compute-0 podman[101647]: 2026-01-20 19:06:41.858486288 +0000 UTC m=+0.146600846 container attach 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3)
Jan 20 19:06:41 compute-0 objective_chatelet[101663]: 167 167
Jan 20 19:06:41 compute-0 systemd[1]: libpod-7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36.scope: Deactivated successfully.
Jan 20 19:06:41 compute-0 podman[101647]: 2026-01-20 19:06:41.861909327 +0000 UTC m=+0.150023865 container died 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:06:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d943b9c3a501da3b14f2cac1f5614b5e10f6a7688972973242168ce25577f4b-merged.mount: Deactivated successfully.
Jan 20 19:06:41 compute-0 podman[101647]: 2026-01-20 19:06:41.903495871 +0000 UTC m=+0.191610419 container remove 7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_chatelet, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:06:41 compute-0 systemd[1]: libpod-conmon-7b0c692ba195a085cfc2af03215d3d6a26195c4d82594f5b2b67b9d78da48c36.scope: Deactivated successfully.
Jan 20 19:06:42 compute-0 podman[101688]: 2026-01-20 19:06:42.077575837 +0000 UTC m=+0.044728056 container create 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030)
Jan 20 19:06:42 compute-0 systemd[1]: Started libpod-conmon-8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2.scope.
Jan 20 19:06:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:42 compute-0 podman[101688]: 2026-01-20 19:06:42.056422601 +0000 UTC m=+0.023574860 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:06:42 compute-0 podman[101688]: 2026-01-20 19:06:42.161915795 +0000 UTC m=+0.129068014 container init 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:06:42 compute-0 podman[101688]: 2026-01-20 19:06:42.167234293 +0000 UTC m=+0.134386502 container start 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:06:42 compute-0 podman[101688]: 2026-01-20 19:06:42.17217911 +0000 UTC m=+0.139331319 container attach 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:06:42 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 20 19:06:42 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 20 19:06:42 compute-0 charming_mirzakhani[101705]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:06:42 compute-0 charming_mirzakhani[101705]: --> All data devices are unavailable
Jan 20 19:06:42 compute-0 systemd[1]: libpod-8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2.scope: Deactivated successfully.
Jan 20 19:06:42 compute-0 podman[101688]: 2026-01-20 19:06:42.662122494 +0000 UTC m=+0.629274713 container died 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:06:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f990d2d4312e4569f5e6f9254514ec6af5fc4b8064d01a12c5d161b88d2f07f-merged.mount: Deactivated successfully.
Jan 20 19:06:42 compute-0 podman[101688]: 2026-01-20 19:06:42.703674237 +0000 UTC m=+0.670826446 container remove 8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=charming_mirzakhani, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 20 19:06:42 compute-0 systemd[1]: libpod-conmon-8d75a145974e65593e87810dccea9cb37672c3e5589c2a80976fc949d2f4e9b2.scope: Deactivated successfully.
Jan 20 19:06:42 compute-0 sudo[101611]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:42 compute-0 sudo[101737]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:06:42 compute-0 sudo[101737]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:42 compute-0 sudo[101737]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:42 compute-0 sudo[101765]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:06:42 compute-0 sudo[101765]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:43 compute-0 podman[101801]: 2026-01-20 19:06:43.118459579 +0000 UTC m=+0.041056251 container create 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:06:43 compute-0 systemd[1]: Started libpod-conmon-4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611.scope.
Jan 20 19:06:43 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:43 compute-0 podman[101801]: 2026-01-20 19:06:43.099967012 +0000 UTC m=+0.022563684 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:06:43 compute-0 podman[101801]: 2026-01-20 19:06:43.24627562 +0000 UTC m=+0.168872332 container init 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:06:43 compute-0 podman[101801]: 2026-01-20 19:06:43.253108327 +0000 UTC m=+0.175704999 container start 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 20 19:06:43 compute-0 awesome_shannon[101817]: 167 167
Jan 20 19:06:43 compute-0 systemd[1]: libpod-4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611.scope: Deactivated successfully.
Jan 20 19:06:43 compute-0 conmon[101817]: conmon 4e1eb00a3925a24f2203 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611.scope/container/memory.events
Jan 20 19:06:43 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 20 19:06:43 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 20 19:06:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 186 B/s wr, 4 op/s; 30 B/s, 1 objects/s recovering
Jan 20 19:06:43 compute-0 podman[101801]: 2026-01-20 19:06:43.687708622 +0000 UTC m=+0.610305294 container attach 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 19:06:43 compute-0 podman[101801]: 2026-01-20 19:06:43.688148103 +0000 UTC m=+0.610744785 container died 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:06:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:43 compute-0 ceph-mon[75120]: 11.12 scrub starts
Jan 20 19:06:43 compute-0 ceph-mon[75120]: 11.12 scrub ok
Jan 20 19:06:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-2fe398a2abb63578d6976c009c0407435bce0658561d2f9eb2de9763266c59de-merged.mount: Deactivated successfully.
Jan 20 19:06:43 compute-0 podman[101801]: 2026-01-20 19:06:43.775293893 +0000 UTC m=+0.697890575 container remove 4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=awesome_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:06:43 compute-0 systemd[1]: libpod-conmon-4e1eb00a3925a24f22038be3629de41e1c2dd154ec5d3ee3bcc2509c7ead8611.scope: Deactivated successfully.
Jan 20 19:06:43 compute-0 podman[101843]: 2026-01-20 19:06:43.994371801 +0000 UTC m=+0.109676624 container create 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 19:06:44 compute-0 podman[101843]: 2026-01-20 19:06:43.908932045 +0000 UTC m=+0.024236898 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:06:44 compute-0 systemd[1]: Started libpod-conmon-1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57.scope.
Jan 20 19:06:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 20 19:06:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 20 19:06:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62672b228a469185d2b51b53d343aaecb9e56f55fe0b518a1a7c4efb72a6ce64/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62672b228a469185d2b51b53d343aaecb9e56f55fe0b518a1a7c4efb72a6ce64/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62672b228a469185d2b51b53d343aaecb9e56f55fe0b518a1a7c4efb72a6ce64/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:44 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62672b228a469185d2b51b53d343aaecb9e56f55fe0b518a1a7c4efb72a6ce64/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:44 compute-0 podman[101843]: 2026-01-20 19:06:44.068835694 +0000 UTC m=+0.184140537 container init 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:06:44 compute-0 podman[101843]: 2026-01-20 19:06:44.07447298 +0000 UTC m=+0.189777823 container start 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:06:44 compute-0 podman[101843]: 2026-01-20 19:06:44.077754074 +0000 UTC m=+0.193058907 container attach 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]: {
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:     "0": [
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:         {
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "devices": [
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "/dev/loop3"
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             ],
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_name": "ceph_lv0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_size": "21470642176",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "name": "ceph_lv0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "tags": {
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.cluster_name": "ceph",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.crush_device_class": "",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.encrypted": "0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.objectstore": "bluestore",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.osd_id": "0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.type": "block",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.vdo": "0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.with_tpm": "0"
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             },
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "type": "block",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "vg_name": "ceph_vg0"
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:         }
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:     ],
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:     "1": [
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:         {
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "devices": [
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "/dev/loop4"
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             ],
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_name": "ceph_lv1",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_size": "21470642176",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "name": "ceph_lv1",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "tags": {
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.cluster_name": "ceph",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.crush_device_class": "",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.encrypted": "0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.objectstore": "bluestore",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.osd_id": "1",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.type": "block",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.vdo": "0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.with_tpm": "0"
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             },
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "type": "block",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "vg_name": "ceph_vg1"
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:         }
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:     ],
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:     "2": [
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:         {
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "devices": [
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "/dev/loop5"
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             ],
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_name": "ceph_lv2",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_size": "21470642176",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "name": "ceph_lv2",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "tags": {
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.cluster_name": "ceph",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.crush_device_class": "",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.encrypted": "0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.objectstore": "bluestore",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.osd_id": "2",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.type": "block",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.vdo": "0",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:                 "ceph.with_tpm": "0"
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             },
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "type": "block",
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:             "vg_name": "ceph_vg2"
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:         }
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]:     ]
Jan 20 19:06:44 compute-0 quizzical_franklin[101860]: }
Jan 20 19:06:44 compute-0 systemd[1]: libpod-1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57.scope: Deactivated successfully.
Jan 20 19:06:44 compute-0 podman[101843]: 2026-01-20 19:06:44.360772283 +0000 UTC m=+0.476077126 container died 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:06:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-62672b228a469185d2b51b53d343aaecb9e56f55fe0b518a1a7c4efb72a6ce64-merged.mount: Deactivated successfully.
Jan 20 19:06:44 compute-0 podman[101843]: 2026-01-20 19:06:44.417178681 +0000 UTC m=+0.532483514 container remove 1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quizzical_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:06:44 compute-0 systemd[1]: libpod-conmon-1085f62aabf6b6f7d510fd80a310d1376b81236dc410aaaa6d3272e114a8ef57.scope: Deactivated successfully.
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:06:44 compute-0 sudo[101765]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:06:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:06:44 compute-0 sudo[101881]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:06:44 compute-0 sudo[101881]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:44 compute-0 sudo[101881]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:44 compute-0 sudo[101906]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:06:44 compute-0 sudo[101906]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:44 compute-0 ceph-mon[75120]: 8.10 scrub starts
Jan 20 19:06:44 compute-0 ceph-mon[75120]: 8.10 scrub ok
Jan 20 19:06:44 compute-0 ceph-mon[75120]: pgmap v239: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 186 B/s wr, 4 op/s; 30 B/s, 1 objects/s recovering
Jan 20 19:06:44 compute-0 podman[101943]: 2026-01-20 19:06:44.916058234 +0000 UTC m=+0.075667895 container create 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:06:44 compute-0 systemd[1]: Started libpod-conmon-4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f.scope.
Jan 20 19:06:44 compute-0 podman[101943]: 2026-01-20 19:06:44.869072401 +0000 UTC m=+0.028682082 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:06:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:44 compute-0 podman[101943]: 2026-01-20 19:06:44.994849889 +0000 UTC m=+0.154459580 container init 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 19:06:45 compute-0 podman[101943]: 2026-01-20 19:06:45.000636448 +0000 UTC m=+0.160246119 container start 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:06:45 compute-0 amazing_kalam[101959]: 167 167
Jan 20 19:06:45 compute-0 systemd[1]: libpod-4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f.scope: Deactivated successfully.
Jan 20 19:06:45 compute-0 podman[101943]: 2026-01-20 19:06:45.006647214 +0000 UTC m=+0.166256895 container attach 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Jan 20 19:06:45 compute-0 podman[101943]: 2026-01-20 19:06:45.007341572 +0000 UTC m=+0.166951233 container died 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:06:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-3de09d00de97397fc24e6ff1831e9759ca792175ce5ec1d9f526ca62960b5be6-merged.mount: Deactivated successfully.
Jan 20 19:06:45 compute-0 podman[101943]: 2026-01-20 19:06:45.061708626 +0000 UTC m=+0.221318287 container remove 4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:06:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 20 19:06:45 compute-0 systemd[1]: libpod-conmon-4196573f580da28ee1d1250a1f1a04de902cf1eb99f07af2e3ff760d796f898f.scope: Deactivated successfully.
Jan 20 19:06:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 20 19:06:45 compute-0 podman[101982]: 2026-01-20 19:06:45.283950416 +0000 UTC m=+0.053789611 container create 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 19:06:45 compute-0 systemd[1]: Started libpod-conmon-5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96.scope.
Jan 20 19:06:45 compute-0 podman[101982]: 2026-01-20 19:06:45.262377438 +0000 UTC m=+0.032216633 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:06:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29aa5840987472461549ac0ce27b189f1e8194a3bbae2079c23e0263d10fa9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29aa5840987472461549ac0ce27b189f1e8194a3bbae2079c23e0263d10fa9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29aa5840987472461549ac0ce27b189f1e8194a3bbae2079c23e0263d10fa9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd29aa5840987472461549ac0ce27b189f1e8194a3bbae2079c23e0263d10fa9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:06:45 compute-0 podman[101982]: 2026-01-20 19:06:45.403331269 +0000 UTC m=+0.173170464 container init 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:06:45 compute-0 podman[101982]: 2026-01-20 19:06:45.411937291 +0000 UTC m=+0.181776456 container start 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:06:45 compute-0 podman[101982]: 2026-01-20 19:06:45.415516314 +0000 UTC m=+0.185355529 container attach 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:06:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 170 B/s wr, 3 op/s; 27 B/s, 1 objects/s recovering
Jan 20 19:06:45 compute-0 ceph-mon[75120]: 7.10 scrub starts
Jan 20 19:06:45 compute-0 ceph-mon[75120]: 7.10 scrub ok
Jan 20 19:06:45 compute-0 ceph-mon[75120]: 3.14 scrub starts
Jan 20 19:06:45 compute-0 ceph-mon[75120]: 3.14 scrub ok
Jan 20 19:06:46 compute-0 lvm[102075]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:06:46 compute-0 lvm[102075]: VG ceph_vg0 finished
Jan 20 19:06:46 compute-0 lvm[102078]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:06:46 compute-0 lvm[102078]: VG ceph_vg1 finished
Jan 20 19:06:46 compute-0 lvm[102080]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:06:46 compute-0 lvm[102080]: VG ceph_vg2 finished
Jan 20 19:06:46 compute-0 lvm[102081]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:06:46 compute-0 lvm[102081]: VG ceph_vg1 finished
Jan 20 19:06:46 compute-0 amazing_blackwell[101999]: {}
Jan 20 19:06:46 compute-0 systemd[1]: libpod-5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96.scope: Deactivated successfully.
Jan 20 19:06:46 compute-0 systemd[1]: libpod-5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96.scope: Consumed 1.372s CPU time.
Jan 20 19:06:46 compute-0 podman[101982]: 2026-01-20 19:06:46.313065625 +0000 UTC m=+1.082904790 container died 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:06:46 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 20 19:06:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-fd29aa5840987472461549ac0ce27b189f1e8194a3bbae2079c23e0263d10fa9-merged.mount: Deactivated successfully.
Jan 20 19:06:46 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 20 19:06:46 compute-0 podman[101982]: 2026-01-20 19:06:46.411293691 +0000 UTC m=+1.181132856 container remove 5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=amazing_blackwell, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:06:46 compute-0 systemd[1]: libpod-conmon-5bedb7667600fd76f75288e98843a5d1e5e050a186d34d5c5734bbc60a8eff96.scope: Deactivated successfully.
Jan 20 19:06:46 compute-0 sudo[101906]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:46 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:06:46 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:06:46 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:06:46 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:06:46 compute-0 sudo[102095]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:06:46 compute-0 sudo[102095]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:06:46 compute-0 sudo[102095]: pam_unix(sudo:session): session closed for user root
Jan 20 19:06:46 compute-0 ceph-mon[75120]: pgmap v240: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 170 B/s wr, 3 op/s; 27 B/s, 1 objects/s recovering
Jan 20 19:06:46 compute-0 ceph-mon[75120]: 11.d scrub starts
Jan 20 19:06:46 compute-0 ceph-mon[75120]: 11.d scrub ok
Jan 20 19:06:46 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:06:46 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:06:47 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 20 19:06:47 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 20 19:06:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:47 compute-0 ceph-mon[75120]: 11.1d scrub starts
Jan 20 19:06:47 compute-0 ceph-mon[75120]: 11.1d scrub ok
Jan 20 19:06:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:48 compute-0 ceph-mon[75120]: pgmap v241: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:49 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 20 19:06:49 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 20 19:06:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:49 compute-0 ceph-mon[75120]: 8.1e scrub starts
Jan 20 19:06:49 compute-0 ceph-mon[75120]: 8.1e scrub ok
Jan 20 19:06:49 compute-0 ceph-mon[75120]: pgmap v242: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:50 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 20 19:06:50 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 20 19:06:50 compute-0 ceph-mon[75120]: 7.12 scrub starts
Jan 20 19:06:50 compute-0 ceph-mon[75120]: 7.12 scrub ok
Jan 20 19:06:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:51 compute-0 ceph-mon[75120]: pgmap v243: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:52 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 20 19:06:52 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 20 19:06:52 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 20 19:06:52 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 20 19:06:52 compute-0 ceph-mon[75120]: 7.1 scrub starts
Jan 20 19:06:52 compute-0 ceph-mon[75120]: 7.1 scrub ok
Jan 20 19:06:52 compute-0 ceph-mon[75120]: 3.f scrub starts
Jan 20 19:06:52 compute-0 ceph-mon[75120]: 3.f scrub ok
Jan 20 19:06:53 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 20 19:06:53 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 20 19:06:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:53 compute-0 ceph-mon[75120]: 5.11 scrub starts
Jan 20 19:06:53 compute-0 ceph-mon[75120]: 5.11 scrub ok
Jan 20 19:06:53 compute-0 ceph-mon[75120]: pgmap v244: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:54 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 20 19:06:54 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 20 19:06:54 compute-0 ceph-mon[75120]: 2.17 scrub starts
Jan 20 19:06:54 compute-0 ceph-mon[75120]: 2.17 scrub ok
Jan 20 19:06:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:55 compute-0 ceph-mon[75120]: pgmap v245: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:56 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 20 19:06:56 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 20 19:06:56 compute-0 ceph-mon[75120]: 8.b scrub starts
Jan 20 19:06:56 compute-0 ceph-mon[75120]: 8.b scrub ok
Jan 20 19:06:57 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 20 19:06:57 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 20 19:06:57 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 20 19:06:57 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 20 19:06:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:58 compute-0 ceph-mon[75120]: 11.b scrub starts
Jan 20 19:06:58 compute-0 ceph-mon[75120]: 11.b scrub ok
Jan 20 19:06:58 compute-0 ceph-mon[75120]: 7.4 scrub starts
Jan 20 19:06:58 compute-0 ceph-mon[75120]: 7.4 scrub ok
Jan 20 19:06:58 compute-0 ceph-mon[75120]: pgmap v246: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:06:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:06:59 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 20 19:06:59 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 20 19:06:59 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 20 19:06:59 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 20 19:06:59 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 20 19:06:59 compute-0 ceph-mon[75120]: 11.8 scrub starts
Jan 20 19:06:59 compute-0 ceph-mon[75120]: 11.8 scrub ok
Jan 20 19:06:59 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 20 19:06:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:00 compute-0 sudo[101359]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:00 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 20 19:07:00 compute-0 ceph-mon[75120]: 10.1a scrub starts
Jan 20 19:07:00 compute-0 ceph-mon[75120]: 10.1a scrub ok
Jan 20 19:07:00 compute-0 ceph-mon[75120]: 10.16 scrub starts
Jan 20 19:07:00 compute-0 ceph-mon[75120]: 10.16 scrub ok
Jan 20 19:07:00 compute-0 ceph-mon[75120]: pgmap v247: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:00 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 20 19:07:00 compute-0 sudo[102269]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywdkqedkashtrachglpfujmbhwljimzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936020.355239-132-172902250602074/AnsiballZ_command.py'
Jan 20 19:07:00 compute-0 sudo[102269]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:00 compute-0 python3.9[102271]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:07:01 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 20 19:07:01 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 20 19:07:01 compute-0 sudo[102269]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:01 compute-0 ceph-mon[75120]: 11.4 scrub starts
Jan 20 19:07:01 compute-0 ceph-mon[75120]: 11.4 scrub ok
Jan 20 19:07:01 compute-0 ceph-mon[75120]: 4.e scrub starts
Jan 20 19:07:01 compute-0 ceph-mon[75120]: 4.e scrub ok
Jan 20 19:07:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:02 compute-0 sudo[102556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nylvseyshcytrycjqrgbgeaihvrqhxot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936021.7011354-140-162609135688380/AnsiballZ_selinux.py'
Jan 20 19:07:02 compute-0 sudo[102556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:02 compute-0 ceph-mon[75120]: pgmap v248: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:02 compute-0 python3.9[102558]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 20 19:07:02 compute-0 sudo[102556]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:03 compute-0 sudo[102708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggdiffyqybrchuyqbtjergmurfaqttrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936022.897095-151-149698424355889/AnsiballZ_command.py'
Jan 20 19:07:03 compute-0 sudo[102708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:03 compute-0 python3.9[102710]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 20 19:07:03 compute-0 sudo[102708]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:03 compute-0 sudo[102860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bajcrsylfjwjvrkcuvqydqxfrwcyfqkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936023.5446062-159-43896110659712/AnsiballZ_file.py'
Jan 20 19:07:03 compute-0 sudo[102860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:03 compute-0 python3.9[102862]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:07:03 compute-0 sudo[102860]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:04 compute-0 sudo[103012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxmblappwxtkguszjglfgqromyiomkte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936024.118575-167-268503427543672/AnsiballZ_mount.py'
Jan 20 19:07:04 compute-0 sudo[103012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:04 compute-0 ceph-mon[75120]: pgmap v249: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:04 compute-0 python3.9[103014]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 20 19:07:04 compute-0 sudo[103012]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 20 19:07:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 20 19:07:05 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 20 19:07:05 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 20 19:07:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:05 compute-0 ceph-mon[75120]: 8.2 scrub starts
Jan 20 19:07:05 compute-0 ceph-mon[75120]: 8.2 scrub ok
Jan 20 19:07:05 compute-0 sudo[103164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbqnwpkrsszvgvixtgreszyvyrvvstia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936025.4534636-195-5084440073234/AnsiballZ_file.py'
Jan 20 19:07:05 compute-0 sudo[103164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:05 compute-0 python3.9[103166]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:07:05 compute-0 sudo[103164]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:06 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 20 19:07:06 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 20 19:07:06 compute-0 sudo[103316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwlgtwzwemijpupfhtbpwbnkftgralui ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936026.117161-203-245493789297051/AnsiballZ_stat.py'
Jan 20 19:07:06 compute-0 sudo[103316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:06 compute-0 python3.9[103318]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:07:06 compute-0 sudo[103316]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:06 compute-0 ceph-mon[75120]: 2.15 scrub starts
Jan 20 19:07:06 compute-0 ceph-mon[75120]: 2.15 scrub ok
Jan 20 19:07:06 compute-0 ceph-mon[75120]: pgmap v250: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:06 compute-0 sudo[103394]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icmdxknfvxgckwhjzqeyuodudssttssa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936026.117161-203-245493789297051/AnsiballZ_file.py'
Jan 20 19:07:06 compute-0 sudo[103394]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:06 compute-0 python3.9[103396]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:07:06 compute-0 sudo[103394]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:07 compute-0 sudo[103546]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uejurdwgddhprtqztdugywthwzlhkebi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936027.3534596-224-266188111447848/AnsiballZ_stat.py'
Jan 20 19:07:07 compute-0 sudo[103546]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:07 compute-0 python3.9[103548]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:07:07 compute-0 sudo[103546]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:07 compute-0 ceph-mon[75120]: 5.12 scrub starts
Jan 20 19:07:07 compute-0 ceph-mon[75120]: 5.12 scrub ok
Jan 20 19:07:07 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 20 19:07:07 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 20 19:07:08 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 20 19:07:08 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 20 19:07:08 compute-0 sudo[103700]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbuehrketljgddllssagcdavbgvtqtqc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936028.2007885-237-103588739851286/AnsiballZ_getent.py'
Jan 20 19:07:08 compute-0 sudo[103700]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:08 compute-0 python3.9[103702]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 20 19:07:08 compute-0 sudo[103700]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:08 compute-0 ceph-mon[75120]: pgmap v251: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:08 compute-0 ceph-mon[75120]: 10.19 scrub starts
Jan 20 19:07:08 compute-0 ceph-mon[75120]: 10.19 scrub ok
Jan 20 19:07:08 compute-0 ceph-mon[75120]: 4.1 scrub starts
Jan 20 19:07:08 compute-0 ceph-mon[75120]: 4.1 scrub ok
Jan 20 19:07:09 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 20 19:07:09 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 20 19:07:09 compute-0 sudo[103853]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndxszczhmajgpewmklnahhgqwrsecjbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936029.0428772-247-268581466112255/AnsiballZ_getent.py'
Jan 20 19:07:09 compute-0 sudo[103853]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:09 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 20 19:07:09 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 20 19:07:09 compute-0 python3.9[103855]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 20 19:07:09 compute-0 sudo[103853]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:09 compute-0 ceph-mon[75120]: 7.2 scrub starts
Jan 20 19:07:09 compute-0 ceph-mon[75120]: 7.2 scrub ok
Jan 20 19:07:09 compute-0 ceph-mon[75120]: 2.11 scrub starts
Jan 20 19:07:09 compute-0 ceph-mon[75120]: 2.11 scrub ok
Jan 20 19:07:09 compute-0 ceph-mon[75120]: pgmap v252: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:10 compute-0 sudo[104006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kquljcybhkcriddfaboztygvdriukvpg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936029.6454487-255-254419291404757/AnsiballZ_group.py'
Jan 20 19:07:10 compute-0 sudo[104006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:10 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 20 19:07:10 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 20 19:07:10 compute-0 python3.9[104008]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 19:07:10 compute-0 sudo[104006]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:10 compute-0 sudo[104158]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whxfllzftxiusuckvzycqbecabpkjqhu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936030.5002704-264-136776671991093/AnsiballZ_file.py'
Jan 20 19:07:10 compute-0 sudo[104158]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:10 compute-0 ceph-mon[75120]: 3.5 scrub starts
Jan 20 19:07:10 compute-0 ceph-mon[75120]: 3.5 scrub ok
Jan 20 19:07:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 20 19:07:10 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 20 19:07:10 compute-0 python3.9[104160]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 20 19:07:10 compute-0 sudo[104158]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:11 compute-0 sudo[104310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zropohifesoblmjuozdwafgzeoltqpqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936031.301301-275-136466464333376/AnsiballZ_dnf.py'
Jan 20 19:07:11 compute-0 sudo[104310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:11 compute-0 python3.9[104312]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:07:11 compute-0 ceph-mon[75120]: 5.16 scrub starts
Jan 20 19:07:11 compute-0 ceph-mon[75120]: 5.16 scrub ok
Jan 20 19:07:11 compute-0 ceph-mon[75120]: pgmap v253: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:11 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 20 19:07:11 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 20 19:07:12 compute-0 ceph-mon[75120]: 2.d scrub starts
Jan 20 19:07:12 compute-0 ceph-mon[75120]: 2.d scrub ok
Jan 20 19:07:13 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 20 19:07:13 compute-0 sudo[104310]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:13 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 20 19:07:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:13 compute-0 sudo[104463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgihrsswjajiojprozznsntzeclsevqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936033.3496685-283-35828152882279/AnsiballZ_file.py'
Jan 20 19:07:13 compute-0 sudo[104463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:13 compute-0 python3.9[104465]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:07:13 compute-0 sudo[104463]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:13 compute-0 ceph-mon[75120]: 8.11 scrub starts
Jan 20 19:07:13 compute-0 ceph-mon[75120]: 8.11 scrub ok
Jan 20 19:07:13 compute-0 ceph-mon[75120]: pgmap v254: 305 pgs: 1 active+clean+scrubbing, 304 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:14 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 20 19:07:14 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 20 19:07:14 compute-0 sshd-session[104466]: Invalid user banx from 45.148.10.240 port 47580
Jan 20 19:07:14 compute-0 sshd-session[104466]: Connection closed by invalid user banx 45.148.10.240 port 47580 [preauth]
Jan 20 19:07:14 compute-0 sudo[104617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cparbibpisprfdasfadgcxrasfngufme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936034.0790699-291-113752735719891/AnsiballZ_stat.py'
Jan 20 19:07:14 compute-0 sudo[104617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:14 compute-0 python3.9[104619]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:07:14 compute-0 sudo[104617]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:14 compute-0 sudo[104695]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-piasbhfibcxjotbunpacdyqvbzqcivjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936034.0790699-291-113752735719891/AnsiballZ_file.py'
Jan 20 19:07:14 compute-0 sudo[104695]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:14 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 20 19:07:14 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 20 19:07:14 compute-0 ceph-mon[75120]: 11.9 scrub starts
Jan 20 19:07:14 compute-0 ceph-mon[75120]: 11.9 scrub ok
Jan 20 19:07:14 compute-0 python3.9[104697]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:07:15 compute-0 sudo[104695]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:15 compute-0 sudo[104847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzdnvqsevakrnraemjzjqbeoixovfjdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936035.1647732-304-181184891808602/AnsiballZ_stat.py'
Jan 20 19:07:15 compute-0 sudo[104847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:15 compute-0 python3.9[104849]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:07:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:15 compute-0 sudo[104847]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:15 compute-0 sudo[104925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsxnkntzfdpzxzmadqkzhxwqfathonin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936035.1647732-304-181184891808602/AnsiballZ_file.py'
Jan 20 19:07:15 compute-0 sudo[104925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:15 compute-0 ceph-mon[75120]: 10.6 scrub starts
Jan 20 19:07:15 compute-0 ceph-mon[75120]: 10.6 scrub ok
Jan 20 19:07:15 compute-0 ceph-mon[75120]: pgmap v255: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:16 compute-0 python3.9[104927]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:07:16 compute-0 sudo[104925]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:16 compute-0 sudo[105077]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgghfsvxighxkifmvjfkqlzmlmqkuirk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936036.3223653-319-106415139639114/AnsiballZ_dnf.py'
Jan 20 19:07:16 compute-0 sudo[105077]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:16 compute-0 python3.9[105079]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:07:17 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 20 19:07:17 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 20 19:07:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:17 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 20 19:07:17 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 20 19:07:18 compute-0 sudo[105077]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 20 19:07:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 20 19:07:18 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 20 19:07:18 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 20 19:07:18 compute-0 ceph-mon[75120]: 3.c scrub starts
Jan 20 19:07:18 compute-0 ceph-mon[75120]: 3.c scrub ok
Jan 20 19:07:18 compute-0 ceph-mon[75120]: pgmap v256: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:18 compute-0 ceph-mon[75120]: 8.d scrub starts
Jan 20 19:07:18 compute-0 ceph-mon[75120]: 8.d scrub ok
Jan 20 19:07:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:18 compute-0 python3.9[105230]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:07:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:19 compute-0 ceph-mon[75120]: 5.13 scrub starts
Jan 20 19:07:19 compute-0 ceph-mon[75120]: 5.13 scrub ok
Jan 20 19:07:19 compute-0 ceph-mon[75120]: 3.1 scrub starts
Jan 20 19:07:19 compute-0 ceph-mon[75120]: 3.1 scrub ok
Jan 20 19:07:19 compute-0 python3.9[105382]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 20 19:07:20 compute-0 python3.9[105532]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:07:20 compute-0 ceph-mon[75120]: pgmap v257: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:21 compute-0 sudo[105682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfugecfexdksigojxgncswaarwjqedyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936040.7928283-360-140996395698866/AnsiballZ_systemd.py'
Jan 20 19:07:21 compute-0 sudo[105682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:21 compute-0 python3.9[105684]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:07:21 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 20 19:07:21 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 20 19:07:21 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 20 19:07:21 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 20 19:07:22 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 20 19:07:22 compute-0 sudo[105682]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:22 compute-0 python3.9[105846]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 20 19:07:22 compute-0 ceph-mon[75120]: pgmap v258: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:23 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 20 19:07:23 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 20 19:07:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:23 compute-0 ceph-mon[75120]: 7.18 scrub starts
Jan 20 19:07:23 compute-0 ceph-mon[75120]: 7.18 scrub ok
Jan 20 19:07:24 compute-0 sudo[105996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcewqcwvxaohtaobncplkuwtfxebdnyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936044.0168407-417-87133067622289/AnsiballZ_systemd.py'
Jan 20 19:07:24 compute-0 sudo[105996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:24 compute-0 python3.9[105998]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:07:24 compute-0 sudo[105996]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:24 compute-0 ceph-mon[75120]: pgmap v259: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 20 19:07:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 20 19:07:25 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 20 19:07:25 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 20 19:07:25 compute-0 sudo[106150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jonazfvjnracgmzhgpgiimqusoiqkami ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936044.803405-417-36054674091605/AnsiballZ_systemd.py'
Jan 20 19:07:25 compute-0 sudo[106150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:25 compute-0 python3.9[106152]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:07:25 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 20 19:07:25 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 20 19:07:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:25 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 20 19:07:25 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 20 19:07:25 compute-0 ceph-mon[75120]: 2.3 scrub starts
Jan 20 19:07:25 compute-0 ceph-mon[75120]: 2.3 scrub ok
Jan 20 19:07:25 compute-0 ceph-mon[75120]: 7.e scrub starts
Jan 20 19:07:25 compute-0 ceph-mon[75120]: 7.e scrub ok
Jan 20 19:07:25 compute-0 ceph-mon[75120]: 11.14 scrub starts
Jan 20 19:07:25 compute-0 ceph-mon[75120]: 11.14 scrub ok
Jan 20 19:07:26 compute-0 sudo[106150]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:26 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 20 19:07:26 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 20 19:07:26 compute-0 ceph-mon[75120]: pgmap v260: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:26 compute-0 ceph-mon[75120]: 2.a scrub starts
Jan 20 19:07:26 compute-0 ceph-mon[75120]: 2.a scrub ok
Jan 20 19:07:26 compute-0 sshd-session[99446]: Connection closed by 192.168.122.30 port 39384
Jan 20 19:07:26 compute-0 sshd-session[99443]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:07:26 compute-0 systemd[1]: session-35.scope: Deactivated successfully.
Jan 20 19:07:26 compute-0 systemd[1]: session-35.scope: Consumed 1min 6.760s CPU time.
Jan 20 19:07:26 compute-0 systemd-logind[797]: Session 35 logged out. Waiting for processes to exit.
Jan 20 19:07:26 compute-0 systemd-logind[797]: Removed session 35.
Jan 20 19:07:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:27 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 20 19:07:27 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 20 19:07:27 compute-0 ceph-mon[75120]: 2.5 scrub starts
Jan 20 19:07:27 compute-0 ceph-mon[75120]: 2.5 scrub ok
Jan 20 19:07:27 compute-0 ceph-mon[75120]: pgmap v261: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:27 compute-0 ceph-mon[75120]: 5.c scrub starts
Jan 20 19:07:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 20 19:07:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 20 19:07:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:28 compute-0 ceph-mon[75120]: 5.c scrub ok
Jan 20 19:07:28 compute-0 ceph-mon[75120]: 7.9 scrub starts
Jan 20 19:07:28 compute-0 ceph-mon[75120]: 7.9 scrub ok
Jan 20 19:07:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:29 compute-0 ceph-mon[75120]: pgmap v262: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 20 19:07:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 20 19:07:31 compute-0 ceph-mon[75120]: 10.b scrub starts
Jan 20 19:07:31 compute-0 ceph-mon[75120]: 10.b scrub ok
Jan 20 19:07:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:07:31
Jan 20 19:07:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:07:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:07:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['images', 'default.rgw.control', 'backups', '.mgr', 'volumes', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta']
Jan 20 19:07:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:07:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:31 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 20 19:07:31 compute-0 sshd-session[106180]: Accepted publickey for zuul from 192.168.122.30 port 51676 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:07:31 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 20 19:07:31 compute-0 systemd-logind[797]: New session 36 of user zuul.
Jan 20 19:07:31 compute-0 systemd[1]: Started Session 36 of User zuul.
Jan 20 19:07:32 compute-0 sshd-session[106180]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:07:32 compute-0 ceph-mon[75120]: pgmap v263: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:32 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 20 19:07:32 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 20 19:07:32 compute-0 python3.9[106333]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:07:33 compute-0 ceph-mon[75120]: 7.5 scrub starts
Jan 20 19:07:33 compute-0 ceph-mon[75120]: 7.5 scrub ok
Jan 20 19:07:33 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 20 19:07:33 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 20 19:07:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:33 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 20 19:07:33 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 20 19:07:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:33 compute-0 sudo[106487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czazeizfocedzqqsoirorlxyuoobpkfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936053.508922-31-224603381184702/AnsiballZ_getent.py'
Jan 20 19:07:33 compute-0 sudo[106487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:34 compute-0 ceph-mon[75120]: 11.2 scrub starts
Jan 20 19:07:34 compute-0 ceph-mon[75120]: 11.2 scrub ok
Jan 20 19:07:34 compute-0 ceph-mon[75120]: 2.13 scrub starts
Jan 20 19:07:34 compute-0 ceph-mon[75120]: 2.13 scrub ok
Jan 20 19:07:34 compute-0 ceph-mon[75120]: pgmap v264: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:34 compute-0 ceph-mon[75120]: 2.4 scrub starts
Jan 20 19:07:34 compute-0 ceph-mon[75120]: 2.4 scrub ok
Jan 20 19:07:34 compute-0 python3.9[106489]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 20 19:07:34 compute-0 sudo[106487]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:07:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:07:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 20 19:07:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 20 19:07:34 compute-0 sudo[106640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nglcnkbbfutirvguuqkxahzxucgbqjyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936054.4636025-43-246318959256386/AnsiballZ_setup.py'
Jan 20 19:07:34 compute-0 sudo[106640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:35 compute-0 python3.9[106642]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:07:35 compute-0 ceph-mon[75120]: 2.7 scrub starts
Jan 20 19:07:35 compute-0 ceph-mon[75120]: 2.7 scrub ok
Jan 20 19:07:35 compute-0 sudo[106640]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:35 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 20 19:07:35 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 20 19:07:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:35 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 20 19:07:35 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 20 19:07:35 compute-0 sudo[106724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duyfwoileefpcrurmwudthqxzsoxmraq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936054.4636025-43-246318959256386/AnsiballZ_dnf.py'
Jan 20 19:07:35 compute-0 sudo[106724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:35 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 20 19:07:35 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 20 19:07:35 compute-0 python3.9[106726]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 19:07:36 compute-0 ceph-mon[75120]: 2.1d scrub starts
Jan 20 19:07:36 compute-0 ceph-mon[75120]: 2.1d scrub ok
Jan 20 19:07:36 compute-0 ceph-mon[75120]: pgmap v265: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:36 compute-0 ceph-mon[75120]: 10.11 scrub starts
Jan 20 19:07:36 compute-0 ceph-mon[75120]: 10.11 scrub ok
Jan 20 19:07:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 20 19:07:36 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 20 19:07:36 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 20 19:07:36 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 20 19:07:37 compute-0 ceph-mon[75120]: 4.a scrub starts
Jan 20 19:07:37 compute-0 ceph-mon[75120]: 4.a scrub ok
Jan 20 19:07:37 compute-0 ceph-mon[75120]: 10.f scrub starts
Jan 20 19:07:37 compute-0 ceph-mon[75120]: 10.f scrub ok
Jan 20 19:07:37 compute-0 sudo[106724]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:37 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 20 19:07:37 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 20 19:07:37 compute-0 sudo[106877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzoaygwcriqchfabpftiorjiwxoawpge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936057.4933069-57-256625554907705/AnsiballZ_dnf.py'
Jan 20 19:07:37 compute-0 sudo[106877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:37 compute-0 python3.9[106879]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:07:38 compute-0 ceph-mon[75120]: 7.8 scrub starts
Jan 20 19:07:38 compute-0 ceph-mon[75120]: 7.8 scrub ok
Jan 20 19:07:38 compute-0 ceph-mon[75120]: pgmap v266: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:38 compute-0 ceph-mon[75120]: 5.f scrub starts
Jan 20 19:07:38 compute-0 ceph-mon[75120]: 5.f scrub ok
Jan 20 19:07:38 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 20 19:07:38 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 20 19:07:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:39 compute-0 sudo[106877]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:39 compute-0 ceph-mon[75120]: 10.10 scrub starts
Jan 20 19:07:39 compute-0 ceph-mon[75120]: 10.10 scrub ok
Jan 20 19:07:39 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 20 19:07:39 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 20 19:07:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 20 19:07:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 20 19:07:39 compute-0 sudo[107030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olooeayjgqczyjdzulhwbzgokiwaqqtn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936059.4192622-65-34816366724540/AnsiballZ_systemd.py'
Jan 20 19:07:40 compute-0 sudo[107030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:40 compute-0 ceph-mon[75120]: 8.9 scrub starts
Jan 20 19:07:40 compute-0 ceph-mon[75120]: 8.9 scrub ok
Jan 20 19:07:40 compute-0 ceph-mon[75120]: pgmap v267: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:40 compute-0 ceph-mon[75120]: 2.6 scrub starts
Jan 20 19:07:40 compute-0 ceph-mon[75120]: 2.6 scrub ok
Jan 20 19:07:40 compute-0 python3.9[107032]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 19:07:40 compute-0 sudo[107030]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:40 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 20 19:07:40 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 20 19:07:41 compute-0 python3.9[107185]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:07:41 compute-0 ceph-mon[75120]: 8.4 scrub starts
Jan 20 19:07:41 compute-0 ceph-mon[75120]: 8.4 scrub ok
Jan 20 19:07:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:41 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 20 19:07:41 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 20 19:07:41 compute-0 sudo[107335]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knwpsddftnsamqwbqprgsewmoubgdofa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936061.5529137-83-62660933012472/AnsiballZ_sefcontext.py'
Jan 20 19:07:41 compute-0 sudo[107335]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:42 compute-0 python3.9[107337]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 20 19:07:42 compute-0 sudo[107335]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:42 compute-0 ceph-mon[75120]: pgmap v268: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:42 compute-0 ceph-mon[75120]: 7.15 scrub starts
Jan 20 19:07:42 compute-0 ceph-mon[75120]: 7.15 scrub ok
Jan 20 19:07:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 20 19:07:42 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 20 19:07:43 compute-0 python3.9[107487]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:07:43 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 20 19:07:43 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 20 19:07:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:43 compute-0 sudo[107643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrqbobpyrpljnpqpejyhlqjtwldilqcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936063.5522087-101-76234349094097/AnsiballZ_dnf.py'
Jan 20 19:07:43 compute-0 sudo[107643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:44 compute-0 python3.9[107645]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:07:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:07:44 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 20 19:07:44 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 20 19:07:44 compute-0 ceph-mon[75120]: 10.2 scrub starts
Jan 20 19:07:44 compute-0 ceph-mon[75120]: 10.2 scrub ok
Jan 20 19:07:44 compute-0 ceph-mon[75120]: 3.3 scrub starts
Jan 20 19:07:44 compute-0 ceph-mon[75120]: 3.3 scrub ok
Jan 20 19:07:44 compute-0 ceph-mon[75120]: pgmap v269: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 20 19:07:44 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 20 19:07:45 compute-0 sudo[107643]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:45 compute-0 ceph-mon[75120]: 11.e scrub starts
Jan 20 19:07:45 compute-0 ceph-mon[75120]: 11.e scrub ok
Jan 20 19:07:45 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 20 19:07:45 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 20 19:07:45 compute-0 sudo[107796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqhovacewcyltlkntribjekejhaafqwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936065.5059712-109-189624709124078/AnsiballZ_command.py'
Jan 20 19:07:45 compute-0 sudo[107796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:46 compute-0 python3.9[107798]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:07:46 compute-0 sudo[107805]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:07:46 compute-0 sudo[107805]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:46 compute-0 sudo[107805]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:46 compute-0 sudo[107840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:07:46 compute-0 sudo[107840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:46 compute-0 ceph-mon[75120]: 5.1a scrub starts
Jan 20 19:07:46 compute-0 ceph-mon[75120]: 5.1a scrub ok
Jan 20 19:07:46 compute-0 ceph-mon[75120]: pgmap v270: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:46 compute-0 ceph-mon[75120]: 3.e scrub starts
Jan 20 19:07:46 compute-0 ceph-mon[75120]: 3.e scrub ok
Jan 20 19:07:46 compute-0 sudo[107796]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:47 compute-0 sudo[107840]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:07:47 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:07:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:07:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:07:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:07:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:07:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:07:47 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:07:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:07:47 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:07:47 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:07:47 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:07:47 compute-0 sudo[108091]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:07:47 compute-0 sudo[108091]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:47 compute-0 sudo[108091]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:47 compute-0 sudo[108116]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:07:47 compute-0 sudo[108116]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:47 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 20 19:07:47 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 20 19:07:47 compute-0 sudo[108214]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vazwebnfabshwyrkhqiwguntsduvkcdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936067.075422-117-220333465695360/AnsiballZ_file.py'
Jan 20 19:07:47 compute-0 sudo[108214]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:47 compute-0 podman[108229]: 2026-01-20 19:07:47.689533543 +0000 UTC m=+0.051331431 container create 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:07:47 compute-0 systemd[76564]: Created slice User Background Tasks Slice.
Jan 20 19:07:47 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:07:47 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:07:47 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:07:47 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:07:47 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:07:47 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:07:47 compute-0 systemd[76564]: Starting Cleanup of User's Temporary Files and Directories...
Jan 20 19:07:47 compute-0 systemd[76564]: Finished Cleanup of User's Temporary Files and Directories.
Jan 20 19:07:47 compute-0 python3.9[108216]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 20 19:07:47 compute-0 systemd[1]: Started libpod-conmon-97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d.scope.
Jan 20 19:07:47 compute-0 podman[108229]: 2026-01-20 19:07:47.666279268 +0000 UTC m=+0.028076986 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:07:47 compute-0 sudo[108214]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:47 compute-0 podman[108229]: 2026-01-20 19:07:47.805399097 +0000 UTC m=+0.167196775 container init 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:07:47 compute-0 podman[108229]: 2026-01-20 19:07:47.813391972 +0000 UTC m=+0.175189650 container start 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 19:07:47 compute-0 podman[108229]: 2026-01-20 19:07:47.816734472 +0000 UTC m=+0.178532160 container attach 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 19:07:47 compute-0 reverent_banach[108246]: 167 167
Jan 20 19:07:47 compute-0 systemd[1]: libpod-97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d.scope: Deactivated successfully.
Jan 20 19:07:47 compute-0 podman[108229]: 2026-01-20 19:07:47.824281735 +0000 UTC m=+0.186079413 container died 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle)
Jan 20 19:07:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-10499866c0f0acfe4bb63cf09fa0917ee0e1d1138ca11a78c6c99064035baa41-merged.mount: Deactivated successfully.
Jan 20 19:07:47 compute-0 podman[108229]: 2026-01-20 19:07:47.881461282 +0000 UTC m=+0.243258950 container remove 97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_banach, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:07:47 compute-0 systemd[1]: libpod-conmon-97c2088c518b72c385a49cd91597319b729030a0515774c57e65f22c2fbf9d0d.scope: Deactivated successfully.
Jan 20 19:07:48 compute-0 podman[108341]: 2026-01-20 19:07:48.066528976 +0000 UTC m=+0.061546565 container create 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Jan 20 19:07:48 compute-0 systemd[1]: Started libpod-conmon-204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a.scope.
Jan 20 19:07:48 compute-0 podman[108341]: 2026-01-20 19:07:48.036662234 +0000 UTC m=+0.031679843 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:07:48 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:48 compute-0 podman[108341]: 2026-01-20 19:07:48.228339376 +0000 UTC m=+0.223356955 container init 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:07:48 compute-0 podman[108341]: 2026-01-20 19:07:48.240575504 +0000 UTC m=+0.235593053 container start 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, ceph=True, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Jan 20 19:07:48 compute-0 podman[108341]: 2026-01-20 19:07:48.244579402 +0000 UTC m=+0.239596971 container attach 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:07:48 compute-0 python3.9[108440]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:07:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:48 compute-0 ceph-mon[75120]: 3.6 scrub starts
Jan 20 19:07:48 compute-0 ceph-mon[75120]: 3.6 scrub ok
Jan 20 19:07:48 compute-0 ceph-mon[75120]: pgmap v271: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:48 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 20 19:07:48 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 20 19:07:48 compute-0 frosty_varahamihira[108362]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:07:48 compute-0 frosty_varahamihira[108362]: --> All data devices are unavailable
Jan 20 19:07:48 compute-0 systemd[1]: libpod-204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a.scope: Deactivated successfully.
Jan 20 19:07:48 compute-0 podman[108341]: 2026-01-20 19:07:48.813052562 +0000 UTC m=+0.808070141 container died 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 19:07:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-681a5dfb404f52e52c78b619f40fe1eaf100c6f4a9c56a3ffb8948d8509e7780-merged.mount: Deactivated successfully.
Jan 20 19:07:48 compute-0 podman[108341]: 2026-01-20 19:07:48.863756395 +0000 UTC m=+0.858773944 container remove 204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 20 19:07:48 compute-0 systemd[1]: libpod-conmon-204906e7553e22b6c041b4b38b3e7c0010609820749c08b8f53b5530be57093a.scope: Deactivated successfully.
Jan 20 19:07:48 compute-0 sudo[108116]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:48 compute-0 sudo[108568]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:07:48 compute-0 sudo[108568]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:48 compute-0 sudo[108568]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:49 compute-0 sudo[108666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdykmlfbjfhogiqyfmzwnwmddiylaafr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936068.7486346-133-263792023973139/AnsiballZ_dnf.py'
Jan 20 19:07:49 compute-0 sudo[108666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:49 compute-0 sudo[108622]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:07:49 compute-0 sudo[108622]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:49 compute-0 python3.9[108669]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:07:49 compute-0 podman[108683]: 2026-01-20 19:07:49.282331186 +0000 UTC m=+0.040647024 container create 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:07:49 compute-0 systemd[1]: Started libpod-conmon-106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514.scope.
Jan 20 19:07:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:49 compute-0 podman[108683]: 2026-01-20 19:07:49.356675394 +0000 UTC m=+0.114991242 container init 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 19:07:49 compute-0 podman[108683]: 2026-01-20 19:07:49.264691752 +0000 UTC m=+0.023007590 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:07:49 compute-0 podman[108683]: 2026-01-20 19:07:49.363377904 +0000 UTC m=+0.121693702 container start 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:07:49 compute-0 podman[108683]: 2026-01-20 19:07:49.366440106 +0000 UTC m=+0.124755954 container attach 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:07:49 compute-0 admiring_goodall[108700]: 167 167
Jan 20 19:07:49 compute-0 systemd[1]: libpod-106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514.scope: Deactivated successfully.
Jan 20 19:07:49 compute-0 podman[108683]: 2026-01-20 19:07:49.386547588 +0000 UTC m=+0.144863386 container died 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 19:07:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-b68e802ab3ea2a5b833c2a0d6155d9ca873c14a3b9549c052ddbb9715daa91d6-merged.mount: Deactivated successfully.
Jan 20 19:07:49 compute-0 podman[108683]: 2026-01-20 19:07:49.429550613 +0000 UTC m=+0.187866411 container remove 106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_goodall, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:07:49 compute-0 systemd[1]: libpod-conmon-106ea2a2acdec9a0311ea6eb52ec1eda7f914f9c90502ca4a573d60042366514.scope: Deactivated successfully.
Jan 20 19:07:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:49 compute-0 podman[108724]: 2026-01-20 19:07:49.660054709 +0000 UTC m=+0.077180296 container create b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:07:49 compute-0 systemd[1]: Started libpod-conmon-b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6.scope.
Jan 20 19:07:49 compute-0 podman[108724]: 2026-01-20 19:07:49.628606334 +0000 UTC m=+0.045731971 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:07:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501201f23acb40a1e51c7081607ec2bdf8d3991fec2e4243e9b7aa67aa1e2156/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501201f23acb40a1e51c7081607ec2bdf8d3991fec2e4243e9b7aa67aa1e2156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501201f23acb40a1e51c7081607ec2bdf8d3991fec2e4243e9b7aa67aa1e2156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:49 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/501201f23acb40a1e51c7081607ec2bdf8d3991fec2e4243e9b7aa67aa1e2156/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:49 compute-0 podman[108724]: 2026-01-20 19:07:49.758725331 +0000 UTC m=+0.175850958 container init b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 19:07:49 compute-0 podman[108724]: 2026-01-20 19:07:49.76540948 +0000 UTC m=+0.182535027 container start b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 19:07:49 compute-0 podman[108724]: 2026-01-20 19:07:49.769680895 +0000 UTC m=+0.186806532 container attach b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]: {
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:     "0": [
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:         {
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "devices": [
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "/dev/loop3"
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             ],
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_name": "ceph_lv0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_size": "21470642176",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "name": "ceph_lv0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "tags": {
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.cluster_name": "ceph",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.crush_device_class": "",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.encrypted": "0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.objectstore": "bluestore",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.osd_id": "0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.type": "block",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.vdo": "0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.with_tpm": "0"
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             },
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "type": "block",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "vg_name": "ceph_vg0"
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:         }
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:     ],
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:     "1": [
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:         {
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "devices": [
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "/dev/loop4"
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             ],
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_name": "ceph_lv1",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_size": "21470642176",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "name": "ceph_lv1",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "tags": {
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.cluster_name": "ceph",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.crush_device_class": "",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.encrypted": "0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.objectstore": "bluestore",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.osd_id": "1",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.type": "block",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.vdo": "0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.with_tpm": "0"
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             },
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "type": "block",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "vg_name": "ceph_vg1"
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:         }
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:     ],
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:     "2": [
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:         {
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "devices": [
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "/dev/loop5"
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             ],
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_name": "ceph_lv2",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_size": "21470642176",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "name": "ceph_lv2",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "tags": {
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.cluster_name": "ceph",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.crush_device_class": "",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.encrypted": "0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.objectstore": "bluestore",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.osd_id": "2",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.type": "block",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.vdo": "0",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:                 "ceph.with_tpm": "0"
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             },
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "type": "block",
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:             "vg_name": "ceph_vg2"
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:         }
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]:     ]
Jan 20 19:07:50 compute-0 pedantic_chandrasekhar[108740]: }
Jan 20 19:07:50 compute-0 systemd[1]: libpod-b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6.scope: Deactivated successfully.
Jan 20 19:07:50 compute-0 podman[108724]: 2026-01-20 19:07:50.060963905 +0000 UTC m=+0.478089472 container died b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 19:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-501201f23acb40a1e51c7081607ec2bdf8d3991fec2e4243e9b7aa67aa1e2156-merged.mount: Deactivated successfully.
Jan 20 19:07:50 compute-0 podman[108724]: 2026-01-20 19:07:50.107253409 +0000 UTC m=+0.524378946 container remove b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pedantic_chandrasekhar, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:07:50 compute-0 systemd[1]: libpod-conmon-b87d87bf4c12afa6355279cf44934133e69684b9c832bb0209b010d38f6ad4e6.scope: Deactivated successfully.
Jan 20 19:07:50 compute-0 sudo[108622]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:50 compute-0 sudo[108761]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:07:50 compute-0 sudo[108761]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:50 compute-0 sudo[108761]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:50 compute-0 sudo[108786]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:07:50 compute-0 sudo[108786]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:50 compute-0 sudo[108666]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:50 compute-0 podman[108824]: 2026-01-20 19:07:50.558591411 +0000 UTC m=+0.046972474 container create b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 19:07:50 compute-0 systemd[1]: Started libpod-conmon-b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347.scope.
Jan 20 19:07:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:50 compute-0 podman[108824]: 2026-01-20 19:07:50.539386344 +0000 UTC m=+0.027767417 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:07:50 compute-0 podman[108824]: 2026-01-20 19:07:50.63633332 +0000 UTC m=+0.124714403 container init b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 19:07:50 compute-0 podman[108824]: 2026-01-20 19:07:50.644455098 +0000 UTC m=+0.132836151 container start b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:07:50 compute-0 podman[108824]: 2026-01-20 19:07:50.647739276 +0000 UTC m=+0.136120329 container attach b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:07:50 compute-0 gallant_bhaskara[108864]: 167 167
Jan 20 19:07:50 compute-0 systemd[1]: libpod-b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347.scope: Deactivated successfully.
Jan 20 19:07:50 compute-0 conmon[108864]: conmon b939e00b8435b790d46b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347.scope/container/memory.events
Jan 20 19:07:50 compute-0 podman[108824]: 2026-01-20 19:07:50.650043609 +0000 UTC m=+0.138424672 container died b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:07:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-b32ea5887c719c4f03b956ce62789ecc5651834a0055ec22c80fe830ce4983f3-merged.mount: Deactivated successfully.
Jan 20 19:07:50 compute-0 podman[108824]: 2026-01-20 19:07:50.694580116 +0000 UTC m=+0.182961169 container remove b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_bhaskara, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 19:07:50 compute-0 systemd[1]: libpod-conmon-b939e00b8435b790d46b81bc22f77bbb56b0106eecf5a20382d0b1d882a7e347.scope: Deactivated successfully.
Jan 20 19:07:50 compute-0 ceph-mon[75120]: 5.1 scrub starts
Jan 20 19:07:50 compute-0 ceph-mon[75120]: 5.1 scrub ok
Jan 20 19:07:50 compute-0 ceph-mon[75120]: pgmap v272: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:50 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 20 19:07:50 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 20 19:07:50 compute-0 podman[108943]: 2026-01-20 19:07:50.862868749 +0000 UTC m=+0.040523640 container create 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:07:50 compute-0 systemd[1]: Started libpod-conmon-9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e.scope.
Jan 20 19:07:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67587eec9af06500bef87b0a431593dcc058f6f8a8d598db4316ad231725f2c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67587eec9af06500bef87b0a431593dcc058f6f8a8d598db4316ad231725f2c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67587eec9af06500bef87b0a431593dcc058f6f8a8d598db4316ad231725f2c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67587eec9af06500bef87b0a431593dcc058f6f8a8d598db4316ad231725f2c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:07:50 compute-0 podman[108943]: 2026-01-20 19:07:50.940058744 +0000 UTC m=+0.117713655 container init 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:07:50 compute-0 podman[108943]: 2026-01-20 19:07:50.845086011 +0000 UTC m=+0.022740922 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:07:50 compute-0 podman[108943]: 2026-01-20 19:07:50.948026718 +0000 UTC m=+0.125681609 container start 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:07:50 compute-0 podman[108943]: 2026-01-20 19:07:50.951864212 +0000 UTC m=+0.129519103 container attach 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030)
Jan 20 19:07:50 compute-0 sudo[109034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mefajzonzqbpkpucirisvazvuzlszrou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936070.7192013-142-24237670708440/AnsiballZ_dnf.py'
Jan 20 19:07:50 compute-0 sudo[109034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:51 compute-0 python3.9[109036]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:07:51 compute-0 lvm[109109]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:07:51 compute-0 lvm[109112]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:07:51 compute-0 lvm[109112]: VG ceph_vg1 finished
Jan 20 19:07:51 compute-0 lvm[109109]: VG ceph_vg0 finished
Jan 20 19:07:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:51 compute-0 lvm[109114]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:07:51 compute-0 lvm[109114]: VG ceph_vg2 finished
Jan 20 19:07:51 compute-0 focused_bell[108998]: {}
Jan 20 19:07:51 compute-0 systemd[1]: libpod-9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e.scope: Deactivated successfully.
Jan 20 19:07:51 compute-0 podman[108943]: 2026-01-20 19:07:51.758574505 +0000 UTC m=+0.936229396 container died 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:07:51 compute-0 systemd[1]: libpod-9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e.scope: Consumed 1.297s CPU time.
Jan 20 19:07:51 compute-0 ceph-mon[75120]: 5.19 scrub starts
Jan 20 19:07:51 compute-0 ceph-mon[75120]: 5.19 scrub ok
Jan 20 19:07:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-67587eec9af06500bef87b0a431593dcc058f6f8a8d598db4316ad231725f2c3-merged.mount: Deactivated successfully.
Jan 20 19:07:51 compute-0 podman[108943]: 2026-01-20 19:07:51.799923806 +0000 UTC m=+0.977578697 container remove 9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_bell, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:07:51 compute-0 systemd[1]: libpod-conmon-9b9a5b4df551697bc69d07b8891a5d0537a7691d9a482287c386eddc43e6604e.scope: Deactivated successfully.
Jan 20 19:07:51 compute-0 sudo[108786]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:07:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:07:51 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:07:51 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:07:51 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 20 19:07:51 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 20 19:07:51 compute-0 sudo[109130]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:07:51 compute-0 sudo[109130]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:07:51 compute-0 sudo[109130]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:52 compute-0 sudo[109034]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:52 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 20 19:07:52 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 20 19:07:52 compute-0 ceph-mon[75120]: pgmap v273: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:07:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:07:52 compute-0 ceph-mon[75120]: 7.a scrub starts
Jan 20 19:07:52 compute-0 ceph-mon[75120]: 7.a scrub ok
Jan 20 19:07:52 compute-0 sudo[109304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hadsxdktnzlfsqhjecrriurkxvpmxpfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936072.7240412-154-146657281374691/AnsiballZ_stat.py'
Jan 20 19:07:52 compute-0 sudo[109304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:53 compute-0 python3.9[109306]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:07:53 compute-0 sudo[109304]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:53 compute-0 ceph-mon[75120]: 5.9 scrub starts
Jan 20 19:07:53 compute-0 ceph-mon[75120]: 5.9 scrub ok
Jan 20 19:07:53 compute-0 sudo[109458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfbrzipjxdcuvbkbrfeukgzeozijixjj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936073.3664036-162-277673894315887/AnsiballZ_slurp.py'
Jan 20 19:07:53 compute-0 sudo[109458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:07:53 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 20 19:07:53 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 20 19:07:54 compute-0 python3.9[109460]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Jan 20 19:07:54 compute-0 sudo[109458]: pam_unix(sudo:session): session closed for user root
Jan 20 19:07:54 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 20 19:07:54 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 20 19:07:54 compute-0 ceph-mon[75120]: pgmap v274: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:54 compute-0 ceph-mon[75120]: 3.11 scrub starts
Jan 20 19:07:54 compute-0 ceph-mon[75120]: 3.11 scrub ok
Jan 20 19:07:54 compute-0 ceph-mon[75120]: 8.e scrub starts
Jan 20 19:07:54 compute-0 ceph-mon[75120]: 8.e scrub ok
Jan 20 19:07:54 compute-0 sshd-session[106183]: Connection closed by 192.168.122.30 port 51676
Jan 20 19:07:54 compute-0 sshd-session[106180]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:07:54 compute-0 systemd-logind[797]: Session 36 logged out. Waiting for processes to exit.
Jan 20 19:07:54 compute-0 systemd[1]: session-36.scope: Deactivated successfully.
Jan 20 19:07:54 compute-0 systemd[1]: session-36.scope: Consumed 17.940s CPU time.
Jan 20 19:07:54 compute-0 systemd-logind[797]: Removed session 36.
Jan 20 19:07:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:55 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 20 19:07:55 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 20 19:07:56 compute-0 ceph-mon[75120]: pgmap v275: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:56 compute-0 ceph-mon[75120]: 2.1b scrub starts
Jan 20 19:07:56 compute-0 ceph-mon[75120]: 2.1b scrub ok
Jan 20 19:07:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:57 compute-0 ceph-mon[75120]: pgmap v276: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:07:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:07:59 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 20 19:07:59 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 20 19:08:00 compute-0 sshd-session[109485]: Accepted publickey for zuul from 192.168.122.30 port 54382 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:08:00 compute-0 systemd-logind[797]: New session 37 of user zuul.
Jan 20 19:08:00 compute-0 systemd[1]: Started Session 37 of User zuul.
Jan 20 19:08:00 compute-0 sshd-session[109485]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:08:00 compute-0 ceph-mon[75120]: pgmap v277: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:00 compute-0 ceph-mon[75120]: 8.1b scrub starts
Jan 20 19:08:00 compute-0 ceph-mon[75120]: 8.1b scrub ok
Jan 20 19:08:01 compute-0 python3.9[109638]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:08:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:01 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 20 19:08:01 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 20 19:08:02 compute-0 python3.9[109792]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:08:02 compute-0 ceph-mon[75120]: pgmap v278: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:02 compute-0 ceph-mon[75120]: 7.11 scrub starts
Jan 20 19:08:02 compute-0 ceph-mon[75120]: 7.11 scrub ok
Jan 20 19:08:02 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 20 19:08:02 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 20 19:08:02 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 20 19:08:02 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 20 19:08:03 compute-0 python3.9[109985]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:08:03 compute-0 sshd-session[109488]: Connection closed by 192.168.122.30 port 54382
Jan 20 19:08:03 compute-0 sshd-session[109485]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:08:03 compute-0 systemd[1]: session-37.scope: Deactivated successfully.
Jan 20 19:08:03 compute-0 systemd[1]: session-37.scope: Consumed 2.333s CPU time.
Jan 20 19:08:03 compute-0 systemd-logind[797]: Session 37 logged out. Waiting for processes to exit.
Jan 20 19:08:03 compute-0 systemd-logind[797]: Removed session 37.
Jan 20 19:08:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:03 compute-0 ceph-mon[75120]: 11.1a scrub starts
Jan 20 19:08:03 compute-0 ceph-mon[75120]: 11.1a scrub ok
Jan 20 19:08:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:04 compute-0 ceph-mon[75120]: 5.1d scrub starts
Jan 20 19:08:04 compute-0 ceph-mon[75120]: 5.1d scrub ok
Jan 20 19:08:04 compute-0 ceph-mon[75120]: pgmap v279: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:04 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 20 19:08:04 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 20 19:08:05 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 20 19:08:05 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 20 19:08:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:06 compute-0 ceph-mon[75120]: 10.13 scrub starts
Jan 20 19:08:06 compute-0 ceph-mon[75120]: 10.13 scrub ok
Jan 20 19:08:06 compute-0 ceph-mon[75120]: 11.6 scrub starts
Jan 20 19:08:06 compute-0 ceph-mon[75120]: 11.6 scrub ok
Jan 20 19:08:06 compute-0 ceph-mon[75120]: pgmap v280: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:08 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 20 19:08:08 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 20 19:08:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:08 compute-0 ceph-mon[75120]: pgmap v281: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:08 compute-0 ceph-mon[75120]: 11.f scrub starts
Jan 20 19:08:08 compute-0 ceph-mon[75120]: 11.f scrub ok
Jan 20 19:08:08 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 20 19:08:08 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 20 19:08:09 compute-0 sshd-session[110012]: Accepted publickey for zuul from 192.168.122.30 port 44272 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:08:09 compute-0 systemd-logind[797]: New session 38 of user zuul.
Jan 20 19:08:09 compute-0 systemd[1]: Started Session 38 of User zuul.
Jan 20 19:08:09 compute-0 sshd-session[110012]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:08:09 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 20 19:08:09 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 20 19:08:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:09 compute-0 ceph-mon[75120]: 4.4 scrub starts
Jan 20 19:08:09 compute-0 ceph-mon[75120]: 4.4 scrub ok
Jan 20 19:08:09 compute-0 ceph-mon[75120]: 7.3 scrub starts
Jan 20 19:08:09 compute-0 ceph-mon[75120]: 7.3 scrub ok
Jan 20 19:08:10 compute-0 python3.9[110165]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:08:10 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 20 19:08:10 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 20 19:08:10 compute-0 ceph-mon[75120]: pgmap v282: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:10 compute-0 ceph-mon[75120]: 7.6 scrub starts
Jan 20 19:08:10 compute-0 ceph-mon[75120]: 7.6 scrub ok
Jan 20 19:08:10 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 20 19:08:10 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 20 19:08:11 compute-0 python3.9[110319]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:08:11 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 20 19:08:11 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 20 19:08:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:11 compute-0 ceph-mon[75120]: 11.1c scrub starts
Jan 20 19:08:11 compute-0 ceph-mon[75120]: 11.1c scrub ok
Jan 20 19:08:11 compute-0 ceph-mon[75120]: 11.1 scrub starts
Jan 20 19:08:11 compute-0 ceph-mon[75120]: 11.1 scrub ok
Jan 20 19:08:11 compute-0 sudo[110473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bspdzlpjcsvuxtipgwkpdosmsgaejzjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936091.468658-35-1610680025752/AnsiballZ_setup.py'
Jan 20 19:08:11 compute-0 sudo[110473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:12 compute-0 python3.9[110475]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:08:12 compute-0 sudo[110473]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:12 compute-0 sudo[110557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlkkpmiullkgxthlymyfcvtkwpqofqzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936091.468658-35-1610680025752/AnsiballZ_dnf.py'
Jan 20 19:08:12 compute-0 sudo[110557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:12 compute-0 ceph-mon[75120]: pgmap v283: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:12 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 20 19:08:12 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 20 19:08:12 compute-0 python3.9[110559]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:08:13 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 20 19:08:13 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 20 19:08:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:13 compute-0 ceph-mon[75120]: 4.2 scrub starts
Jan 20 19:08:13 compute-0 ceph-mon[75120]: 4.2 scrub ok
Jan 20 19:08:13 compute-0 ceph-mon[75120]: 3.17 scrub starts
Jan 20 19:08:13 compute-0 ceph-mon[75120]: 3.17 scrub ok
Jan 20 19:08:13 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 20 19:08:13 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 20 19:08:14 compute-0 sudo[110557]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:14 compute-0 sudo[110710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-epdcvuamkmitrvsltvhcpifckhlnimch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936094.3329663-47-264316297449931/AnsiballZ_setup.py'
Jan 20 19:08:14 compute-0 sudo[110710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:14 compute-0 ceph-mon[75120]: pgmap v284: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:14 compute-0 ceph-mon[75120]: 4.13 scrub starts
Jan 20 19:08:14 compute-0 ceph-mon[75120]: 4.13 scrub ok
Jan 20 19:08:14 compute-0 python3.9[110712]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:08:15 compute-0 sudo[110710]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:15 compute-0 sudo[110905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-govpdelpanrilnfbeucdhtahybfhzupf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936095.3693862-58-38085634122457/AnsiballZ_file.py'
Jan 20 19:08:15 compute-0 sudo[110905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:15 compute-0 python3.9[110907]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:08:15 compute-0 sudo[110905]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:16 compute-0 sudo[111057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwwmoagvhwjvhgkxskqokczobasbwmpx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936096.136471-66-10354635222360/AnsiballZ_command.py'
Jan 20 19:08:16 compute-0 sudo[111057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:16 compute-0 python3.9[111059]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:08:16 compute-0 ceph-mon[75120]: pgmap v285: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:16 compute-0 sudo[111057]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:17 compute-0 sudo[111222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-effcdvpmugjmhocljiuldpcrzqitzgtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936096.962567-74-44282019892054/AnsiballZ_stat.py'
Jan 20 19:08:17 compute-0 sudo[111222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:17 compute-0 python3.9[111224]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:08:17 compute-0 sudo[111222]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:17 compute-0 sudo[111300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxznejanbivtnjudoiosguclroybbovq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936096.962567-74-44282019892054/AnsiballZ_file.py'
Jan 20 19:08:17 compute-0 sudo[111300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:17 compute-0 python3.9[111302]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:08:17 compute-0 sudo[111300]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:18 compute-0 sudo[111452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-klsgjpyestjaxovxxqypewjelqquqjwh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936098.141662-86-226319630056060/AnsiballZ_stat.py'
Jan 20 19:08:18 compute-0 sudo[111452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:18 compute-0 python3.9[111454]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:08:18 compute-0 sudo[111452]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:18 compute-0 sudo[111530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvvqmxdfvinujwcwyqrsdqgcafdxwcye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936098.141662-86-226319630056060/AnsiballZ_file.py'
Jan 20 19:08:18 compute-0 sudo[111530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:18 compute-0 ceph-mon[75120]: pgmap v286: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:18 compute-0 python3.9[111532]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:08:18 compute-0 sudo[111530]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:19 compute-0 sudo[111682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ummdcpgxmfjfqbnipsurxzwtvjoilall ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936099.1387022-99-238385700081168/AnsiballZ_ini_file.py'
Jan 20 19:08:19 compute-0 sudo[111682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:20 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 20 19:08:20 compute-0 ceph-mon[75120]: pgmap v287: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:20 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 20 19:08:20 compute-0 python3.9[111684]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:08:20 compute-0 sudo[111682]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:20 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 20 19:08:20 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 20 19:08:20 compute-0 sudo[111834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dybtkdsidszboravonguaqsfxydwmqgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936100.5905616-99-95603364718050/AnsiballZ_ini_file.py'
Jan 20 19:08:20 compute-0 sudo[111834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:21 compute-0 python3.9[111836]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:08:21 compute-0 sudo[111834]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:21 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 20 19:08:21 compute-0 ceph-mon[75120]: 4.f scrub starts
Jan 20 19:08:21 compute-0 ceph-mon[75120]: 4.f scrub ok
Jan 20 19:08:21 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 20 19:08:21 compute-0 sudo[111986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihllghledhhjgdxgggtfqzyshsmasbng ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936101.170799-99-259058013047485/AnsiballZ_ini_file.py'
Jan 20 19:08:21 compute-0 sudo[111986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:21 compute-0 python3.9[111988]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:08:21 compute-0 sudo[111986]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:21 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 20 19:08:21 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 20 19:08:21 compute-0 sudo[112138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vatbonupojnvrmbjdktxlovkilkqbchf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936101.737683-99-264163581460138/AnsiballZ_ini_file.py'
Jan 20 19:08:21 compute-0 sudo[112138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:22 compute-0 python3.9[112140]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:08:22 compute-0 sudo[112138]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:22 compute-0 ceph-mon[75120]: 11.1f scrub starts
Jan 20 19:08:22 compute-0 ceph-mon[75120]: 11.1f scrub ok
Jan 20 19:08:22 compute-0 ceph-mon[75120]: 4.d scrub starts
Jan 20 19:08:22 compute-0 ceph-mon[75120]: 4.d scrub ok
Jan 20 19:08:22 compute-0 ceph-mon[75120]: pgmap v288: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:22 compute-0 sudo[112290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osemrwsqnspnkuqjtyniykvwvpulidub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936102.4182096-130-160824397196120/AnsiballZ_dnf.py'
Jan 20 19:08:22 compute-0 sudo[112290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:22 compute-0 python3.9[112292]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:08:23 compute-0 ceph-mon[75120]: 3.16 scrub starts
Jan 20 19:08:23 compute-0 ceph-mon[75120]: 3.16 scrub ok
Jan 20 19:08:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:23 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 20 19:08:23 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 20 19:08:24 compute-0 sudo[112290]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 20 19:08:24 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 20 19:08:24 compute-0 ceph-mon[75120]: pgmap v289: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:24 compute-0 sudo[112443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vylxuhmicfbjihybptbpkotfiaoniqro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936104.5480146-141-90905055114593/AnsiballZ_setup.py'
Jan 20 19:08:24 compute-0 sudo[112443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:24 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 20 19:08:24 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 20 19:08:25 compute-0 python3.9[112445]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:08:25 compute-0 sudo[112443]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:25 compute-0 ceph-mon[75120]: 11.1e scrub starts
Jan 20 19:08:25 compute-0 ceph-mon[75120]: 11.1e scrub ok
Jan 20 19:08:25 compute-0 ceph-mon[75120]: 4.7 scrub starts
Jan 20 19:08:25 compute-0 ceph-mon[75120]: 4.7 scrub ok
Jan 20 19:08:25 compute-0 sudo[112597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syqjzonammonnowdtzeplszeuvsmxmfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936105.3263023-149-688167727744/AnsiballZ_stat.py'
Jan 20 19:08:25 compute-0 sudo[112597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:25 compute-0 python3.9[112599]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:08:25 compute-0 sudo[112597]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:26 compute-0 sudo[112749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbdfdjptdmnwtklafzhnppzhuozlqfim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936106.004705-158-28130120096412/AnsiballZ_stat.py'
Jan 20 19:08:26 compute-0 sudo[112749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:26 compute-0 ceph-mon[75120]: 11.1b scrub starts
Jan 20 19:08:26 compute-0 ceph-mon[75120]: 11.1b scrub ok
Jan 20 19:08:26 compute-0 ceph-mon[75120]: pgmap v290: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:26 compute-0 python3.9[112751]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:08:26 compute-0 sudo[112749]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:26 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 20 19:08:26 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 20 19:08:26 compute-0 sudo[112901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtccsxlydcolpouufyzgnzqcfqfuujzq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936106.683802-168-147968551177042/AnsiballZ_command.py'
Jan 20 19:08:26 compute-0 sudo[112901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:27 compute-0 python3.9[112903]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:08:27 compute-0 sudo[112901]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:27 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 20 19:08:27 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 20 19:08:27 compute-0 ceph-mon[75120]: 8.1c scrub starts
Jan 20 19:08:27 compute-0 ceph-mon[75120]: 8.1c scrub ok
Jan 20 19:08:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:27 compute-0 sudo[113054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iikvbgvmtwflhqeyrortbwlkekfkgupo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936107.3488111-178-187416652852618/AnsiballZ_service_facts.py'
Jan 20 19:08:27 compute-0 sudo[113054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:27 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 20 19:08:27 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 20 19:08:27 compute-0 python3.9[113056]: ansible-service_facts Invoked
Jan 20 19:08:27 compute-0 network[113073]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 19:08:27 compute-0 network[113074]: 'network-scripts' will be removed from distribution in near future.
Jan 20 19:08:27 compute-0 network[113075]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 19:08:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 20 19:08:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 20 19:08:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 20 19:08:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 20 19:08:28 compute-0 ceph-mon[75120]: 3.9 scrub starts
Jan 20 19:08:28 compute-0 ceph-mon[75120]: 3.9 scrub ok
Jan 20 19:08:28 compute-0 ceph-mon[75120]: pgmap v291: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:28 compute-0 ceph-mon[75120]: 4.11 scrub starts
Jan 20 19:08:28 compute-0 ceph-mon[75120]: 4.11 scrub ok
Jan 20 19:08:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:28 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 20 19:08:28 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 20 19:08:29 compute-0 ceph-mon[75120]: 4.9 scrub starts
Jan 20 19:08:29 compute-0 ceph-mon[75120]: 4.9 scrub ok
Jan 20 19:08:29 compute-0 ceph-mon[75120]: 7.f scrub starts
Jan 20 19:08:29 compute-0 ceph-mon[75120]: 7.f scrub ok
Jan 20 19:08:29 compute-0 ceph-mon[75120]: 7.1c scrub starts
Jan 20 19:08:29 compute-0 ceph-mon[75120]: 7.1c scrub ok
Jan 20 19:08:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:29 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 20 19:08:29 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 20 19:08:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 20 19:08:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 20 19:08:30 compute-0 ceph-mon[75120]: pgmap v292: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:30 compute-0 ceph-mon[75120]: 11.11 scrub starts
Jan 20 19:08:30 compute-0 ceph-mon[75120]: 11.11 scrub ok
Jan 20 19:08:30 compute-0 ceph-mon[75120]: 4.5 scrub starts
Jan 20 19:08:31 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 20 19:08:31 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 20 19:08:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:08:31
Jan 20 19:08:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:08:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:08:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.log', 'default.rgw.meta', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.control']
Jan 20 19:08:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:08:31 compute-0 ceph-mon[75120]: 4.5 scrub ok
Jan 20 19:08:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:31 compute-0 sudo[113054]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:32 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 20 19:08:32 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 20 19:08:32 compute-0 ceph-mon[75120]: 4.8 scrub starts
Jan 20 19:08:32 compute-0 ceph-mon[75120]: 4.8 scrub ok
Jan 20 19:08:32 compute-0 ceph-mon[75120]: pgmap v293: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:32 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 20 19:08:32 compute-0 sudo[113358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-graiucwwjhlzkbbwbroxdoggtlrdlpzs ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1768936112.4307714-193-99851030007/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1768936112.4307714-193-99851030007/args'
Jan 20 19:08:32 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 20 19:08:32 compute-0 sudo[113358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:32 compute-0 sudo[113358]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:33 compute-0 sudo[113525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcedupchgfcfseeyopjbgrefftyzbfnf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936113.111457-204-240816885728733/AnsiballZ_dnf.py'
Jan 20 19:08:33 compute-0 sudo[113525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:33 compute-0 ceph-mon[75120]: 7.13 scrub starts
Jan 20 19:08:33 compute-0 ceph-mon[75120]: 7.13 scrub ok
Jan 20 19:08:33 compute-0 ceph-mon[75120]: 11.18 scrub starts
Jan 20 19:08:33 compute-0 ceph-mon[75120]: 11.18 scrub ok
Jan 20 19:08:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:33 compute-0 python3.9[113527]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:08:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:33 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 20 19:08:33 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:08:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:08:34 compute-0 ceph-mon[75120]: pgmap v294: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:34 compute-0 ceph-mon[75120]: 8.12 scrub starts
Jan 20 19:08:34 compute-0 ceph-mon[75120]: 8.12 scrub ok
Jan 20 19:08:34 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 20 19:08:34 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 20 19:08:34 compute-0 sudo[113525]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:35 compute-0 ceph-mon[75120]: 6.8 scrub starts
Jan 20 19:08:35 compute-0 ceph-mon[75120]: 6.8 scrub ok
Jan 20 19:08:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:35 compute-0 sudo[113678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sonjwavyraavmsjcohdjgbzaxatltozf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936115.2215488-217-126630722576015/AnsiballZ_package_facts.py'
Jan 20 19:08:35 compute-0 sudo[113678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:36 compute-0 python3.9[113680]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 20 19:08:36 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 20 19:08:36 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 20 19:08:36 compute-0 sudo[113678]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:36 compute-0 ceph-mon[75120]: pgmap v295: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:36 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 20 19:08:36 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 20 19:08:37 compute-0 sudo[113830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-segabnawoxwaninzkgmykbkggqjxeesj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936116.8642433-227-532244908463/AnsiballZ_stat.py'
Jan 20 19:08:37 compute-0 sudo[113830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:37 compute-0 python3.9[113832]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:08:37 compute-0 sudo[113830]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:37 compute-0 sudo[113908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kypvhyecmfnbknwylalgepaawvsgousg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936116.8642433-227-532244908463/AnsiballZ_file.py'
Jan 20 19:08:37 compute-0 sudo[113908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:37 compute-0 ceph-mon[75120]: 3.a scrub starts
Jan 20 19:08:37 compute-0 ceph-mon[75120]: 3.a scrub ok
Jan 20 19:08:37 compute-0 ceph-mon[75120]: 6.f scrub starts
Jan 20 19:08:37 compute-0 ceph-mon[75120]: 6.f scrub ok
Jan 20 19:08:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:37 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 20 19:08:37 compute-0 python3.9[113910]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:08:37 compute-0 sudo[113908]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:37 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 20 19:08:38 compute-0 sudo[114060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnfecybjycxmoryyiynlrbpqdvyrabgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936118.070186-239-34430970484172/AnsiballZ_stat.py'
Jan 20 19:08:38 compute-0 sudo[114060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:38 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 20 19:08:38 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 20 19:08:38 compute-0 python3.9[114062]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:08:38 compute-0 sudo[114060]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:38 compute-0 ceph-mon[75120]: pgmap v296: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:38 compute-0 ceph-mon[75120]: 9.e scrub starts
Jan 20 19:08:38 compute-0 ceph-mon[75120]: 9.e scrub ok
Jan 20 19:08:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:38 compute-0 sudo[114138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iplthlocqguufscnkualwiktuotbfxkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936118.070186-239-34430970484172/AnsiballZ_file.py'
Jan 20 19:08:38 compute-0 sudo[114138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:39 compute-0 python3.9[114140]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:08:39 compute-0 sudo[114138]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:39 compute-0 ceph-mon[75120]: 8.c scrub starts
Jan 20 19:08:39 compute-0 ceph-mon[75120]: 8.c scrub ok
Jan 20 19:08:40 compute-0 sudo[114290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blvrbxdtbaosodpcclnktuliuuuxwwft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936119.5583546-257-77885313150944/AnsiballZ_lineinfile.py'
Jan 20 19:08:40 compute-0 sudo[114290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:40 compute-0 python3.9[114292]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:08:40 compute-0 sudo[114290]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:40 compute-0 ceph-mon[75120]: pgmap v297: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:40 compute-0 sudo[114442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rzhjsiiqzlzuqzxjmbikomqkgkrmzhrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936120.7213843-272-186797735201299/AnsiballZ_setup.py'
Jan 20 19:08:41 compute-0 sudo[114442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:41 compute-0 python3.9[114444]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:08:41 compute-0 sudo[114442]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:42 compute-0 sudo[114526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djiwxbboeieuyqokqcdtonbuazuagflw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936120.7213843-272-186797735201299/AnsiballZ_systemd.py'
Jan 20 19:08:42 compute-0 sudo[114526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:42 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 20 19:08:42 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 20 19:08:42 compute-0 python3.9[114528]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:08:42 compute-0 sudo[114526]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:42 compute-0 ceph-mon[75120]: pgmap v298: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:43 compute-0 sshd-session[110015]: Connection closed by 192.168.122.30 port 44272
Jan 20 19:08:43 compute-0 sshd-session[110012]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:08:43 compute-0 systemd-logind[797]: Session 38 logged out. Waiting for processes to exit.
Jan 20 19:08:43 compute-0 systemd[1]: session-38.scope: Deactivated successfully.
Jan 20 19:08:43 compute-0 systemd[1]: session-38.scope: Consumed 23.631s CPU time.
Jan 20 19:08:43 compute-0 systemd-logind[797]: Removed session 38.
Jan 20 19:08:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:43 compute-0 ceph-mon[75120]: 3.15 scrub starts
Jan 20 19:08:43 compute-0 ceph-mon[75120]: 3.15 scrub ok
Jan 20 19:08:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:44 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 20 19:08:44 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:08:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:08:44 compute-0 ceph-mon[75120]: pgmap v299: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:44 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 20 19:08:44 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 20 19:08:45 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 20 19:08:45 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 20 19:08:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:45 compute-0 ceph-mon[75120]: 8.1f scrub starts
Jan 20 19:08:45 compute-0 ceph-mon[75120]: 8.1f scrub ok
Jan 20 19:08:45 compute-0 ceph-mon[75120]: 9.8 scrub starts
Jan 20 19:08:45 compute-0 ceph-mon[75120]: 9.8 scrub ok
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.718306) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125718448, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7202, "num_deletes": 252, "total_data_size": 9917882, "memory_usage": 10108864, "flush_reason": "Manual Compaction"}
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125771332, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7811563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 146, "largest_seqno": 7345, "table_properties": {"data_size": 7784868, "index_size": 17492, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8133, "raw_key_size": 74852, "raw_average_key_size": 23, "raw_value_size": 7722475, "raw_average_value_size": 2388, "num_data_blocks": 768, "num_entries": 3233, "num_filter_entries": 3233, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935727, "oldest_key_time": 1768935727, "file_creation_time": 1768936125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 53138 microseconds, and 15029 cpu microseconds.
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.771453) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7811563 bytes OK
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.771479) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.772898) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.772916) EVENT_LOG_v1 {"time_micros": 1768936125772911, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.772952) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9886596, prev total WAL file size 9886596, number of live WAL files 2.
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.776202) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7628KB) 13(58KB) 8(1944B)]
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125776417, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7873467, "oldest_snapshot_seqno": -1}
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3058 keys, 7826258 bytes, temperature: kUnknown
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125846712, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7826258, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7799999, "index_size": 17509, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 73255, "raw_average_key_size": 23, "raw_value_size": 7738995, "raw_average_value_size": 2530, "num_data_blocks": 770, "num_entries": 3058, "num_filter_entries": 3058, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936125, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.847102) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7826258 bytes
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.848648) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 111.8 rd, 111.1 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.5, 0.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3348, records dropped: 290 output_compression: NoCompression
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.848669) EVENT_LOG_v1 {"time_micros": 1768936125848657, "job": 4, "event": "compaction_finished", "compaction_time_micros": 70456, "compaction_time_cpu_micros": 34083, "output_level": 6, "num_output_files": 1, "total_output_size": 7826258, "num_input_records": 3348, "num_output_records": 3058, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125850362, "job": 4, "event": "table_file_deletion", "file_number": 19}
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125850468, "job": 4, "event": "table_file_deletion", "file_number": 13}
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936125850528, "job": 4, "event": "table_file_deletion", "file_number": 8}
Jan 20 19:08:45 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:08:45.775502) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:08:46 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 20 19:08:46 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 20 19:08:46 compute-0 ceph-mon[75120]: 8.1d scrub starts
Jan 20 19:08:46 compute-0 ceph-mon[75120]: 8.1d scrub ok
Jan 20 19:08:46 compute-0 ceph-mon[75120]: pgmap v300: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:47 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 20 19:08:47 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 20 19:08:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:47 compute-0 ceph-mon[75120]: 11.19 scrub starts
Jan 20 19:08:47 compute-0 ceph-mon[75120]: 11.19 scrub ok
Jan 20 19:08:47 compute-0 ceph-mon[75120]: 5.18 scrub starts
Jan 20 19:08:47 compute-0 ceph-mon[75120]: 5.18 scrub ok
Jan 20 19:08:48 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 20 19:08:48 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 20 19:08:48 compute-0 sshd-session[114556]: Accepted publickey for zuul from 192.168.122.30 port 41540 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:08:48 compute-0 systemd-logind[797]: New session 39 of user zuul.
Jan 20 19:08:48 compute-0 systemd[1]: Started Session 39 of User zuul.
Jan 20 19:08:48 compute-0 sshd-session[114556]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:08:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:48 compute-0 ceph-mon[75120]: pgmap v301: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:49 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 20 19:08:49 compute-0 sudo[114709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mysofrznevsitnrzwokaierlznbajtpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936128.80814-17-96997623271832/AnsiballZ_file.py'
Jan 20 19:08:49 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 20 19:08:49 compute-0 sudo[114709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:49 compute-0 python3.9[114711]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:08:49 compute-0 sudo[114709]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:49 compute-0 ceph-mon[75120]: 8.18 scrub starts
Jan 20 19:08:49 compute-0 ceph-mon[75120]: 8.18 scrub ok
Jan 20 19:08:49 compute-0 ceph-mon[75120]: 8.1a scrub starts
Jan 20 19:08:49 compute-0 ceph-mon[75120]: 8.1a scrub ok
Jan 20 19:08:50 compute-0 sudo[114861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcrmbaehtnlrytbhfttlltxeisnjwefn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936129.8538911-29-222962782921977/AnsiballZ_stat.py'
Jan 20 19:08:50 compute-0 sudo[114861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:50 compute-0 python3.9[114863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:08:50 compute-0 sudo[114861]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:50 compute-0 ceph-mon[75120]: pgmap v302: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:50 compute-0 sudo[114939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yshasmrijzsfswtelzitzvtfcvrcefzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936129.8538911-29-222962782921977/AnsiballZ_file.py'
Jan 20 19:08:50 compute-0 sudo[114939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:50 compute-0 python3.9[114941]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:08:50 compute-0 sudo[114939]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:51 compute-0 sshd-session[114559]: Connection closed by 192.168.122.30 port 41540
Jan 20 19:08:51 compute-0 sshd-session[114556]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:08:51 compute-0 systemd[1]: session-39.scope: Deactivated successfully.
Jan 20 19:08:51 compute-0 systemd[1]: session-39.scope: Consumed 1.703s CPU time.
Jan 20 19:08:51 compute-0 systemd-logind[797]: Session 39 logged out. Waiting for processes to exit.
Jan 20 19:08:51 compute-0 systemd-logind[797]: Removed session 39.
Jan 20 19:08:51 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 20 19:08:51 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 20 19:08:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:51 compute-0 ceph-mon[75120]: 8.14 scrub starts
Jan 20 19:08:51 compute-0 ceph-mon[75120]: 8.14 scrub ok
Jan 20 19:08:51 compute-0 sudo[114966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:08:51 compute-0 sudo[114966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:51 compute-0 sudo[114966]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:52 compute-0 sudo[114991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:08:52 compute-0 sudo[114991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:52 compute-0 sudo[114991]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:08:52 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:08:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:08:52 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:08:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:08:52 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:08:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:08:52 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:08:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:08:52 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:08:52 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:08:52 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:08:52 compute-0 sudo[115047]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:08:52 compute-0 sudo[115047]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:52 compute-0 sudo[115047]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:52 compute-0 sudo[115072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:08:52 compute-0 sudo[115072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:52 compute-0 ceph-mon[75120]: pgmap v303: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:08:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:08:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:08:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:08:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:08:52 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:08:52 compute-0 podman[115110]: 2026-01-20 19:08:52.908075282 +0000 UTC m=+0.035480471 container create 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 19:08:52 compute-0 systemd[1]: Started libpod-conmon-314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5.scope.
Jan 20 19:08:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:52 compute-0 podman[115110]: 2026-01-20 19:08:52.985445348 +0000 UTC m=+0.112850547 container init 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:08:52 compute-0 podman[115110]: 2026-01-20 19:08:52.892342741 +0000 UTC m=+0.019747950 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:08:52 compute-0 podman[115110]: 2026-01-20 19:08:52.991028356 +0000 UTC m=+0.118433545 container start 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:08:52 compute-0 podman[115110]: 2026-01-20 19:08:52.994203025 +0000 UTC m=+0.121608234 container attach 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:08:52 compute-0 compassionate_yonath[115124]: 167 167
Jan 20 19:08:52 compute-0 systemd[1]: libpod-314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5.scope: Deactivated successfully.
Jan 20 19:08:52 compute-0 conmon[115124]: conmon 314f597e5b010d0deca1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5.scope/container/memory.events
Jan 20 19:08:52 compute-0 podman[115110]: 2026-01-20 19:08:52.996897002 +0000 UTC m=+0.124302191 container died 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-99d491242c2f4198afaa7d7f496904d02e2d7a1a3a450eb6e65f37df4b54b321-merged.mount: Deactivated successfully.
Jan 20 19:08:53 compute-0 podman[115110]: 2026-01-20 19:08:53.042651865 +0000 UTC m=+0.170057064 container remove 314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_yonath, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:08:53 compute-0 systemd[1]: libpod-conmon-314f597e5b010d0deca1726afeb95252a130068e89b29a164758f7234ad122a5.scope: Deactivated successfully.
Jan 20 19:08:53 compute-0 podman[115150]: 2026-01-20 19:08:53.224850379 +0000 UTC m=+0.051081677 container create 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Jan 20 19:08:53 compute-0 systemd[1]: Started libpod-conmon-98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8.scope.
Jan 20 19:08:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:53 compute-0 podman[115150]: 2026-01-20 19:08:53.197951983 +0000 UTC m=+0.024183371 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:53 compute-0 podman[115150]: 2026-01-20 19:08:53.308497141 +0000 UTC m=+0.134728469 container init 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:08:53 compute-0 podman[115150]: 2026-01-20 19:08:53.314627603 +0000 UTC m=+0.140858901 container start 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 19:08:53 compute-0 podman[115150]: 2026-01-20 19:08:53.318446157 +0000 UTC m=+0.144677475 container attach 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:08:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:53 compute-0 romantic_hoover[115168]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:08:53 compute-0 romantic_hoover[115168]: --> All data devices are unavailable
Jan 20 19:08:53 compute-0 systemd[1]: libpod-98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8.scope: Deactivated successfully.
Jan 20 19:08:53 compute-0 podman[115150]: 2026-01-20 19:08:53.756513589 +0000 UTC m=+0.582744907 container died 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:08:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-b7bd1158e3086c032eb96cd2b51cb5e325025c49ab0db125a631ad71e224bb5b-merged.mount: Deactivated successfully.
Jan 20 19:08:53 compute-0 podman[115150]: 2026-01-20 19:08:53.797038842 +0000 UTC m=+0.623270140 container remove 98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_hoover, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 19:08:53 compute-0 systemd[1]: libpod-conmon-98d014e4f930f9fe3db42e92e4f5305e5be694c1270a7caaf1b6b3790d9c5fe8.scope: Deactivated successfully.
Jan 20 19:08:53 compute-0 sudo[115072]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:53 compute-0 sudo[115201]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:08:53 compute-0 sudo[115201]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:53 compute-0 sudo[115201]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:53 compute-0 sudo[115226]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:08:53 compute-0 sudo[115226]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:54 compute-0 podman[115265]: 2026-01-20 19:08:54.213276104 +0000 UTC m=+0.036396862 container create d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 19:08:54 compute-0 systemd[1]: Started libpod-conmon-d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353.scope.
Jan 20 19:08:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:54 compute-0 podman[115265]: 2026-01-20 19:08:54.282617111 +0000 UTC m=+0.105737889 container init d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True)
Jan 20 19:08:54 compute-0 podman[115265]: 2026-01-20 19:08:54.288179689 +0000 UTC m=+0.111300447 container start d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:08:54 compute-0 podman[115265]: 2026-01-20 19:08:54.291802259 +0000 UTC m=+0.114923047 container attach d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:08:54 compute-0 podman[115265]: 2026-01-20 19:08:54.197917384 +0000 UTC m=+0.021038162 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:08:54 compute-0 systemd[1]: libpod-d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353.scope: Deactivated successfully.
Jan 20 19:08:54 compute-0 elastic_ride[115281]: 167 167
Jan 20 19:08:54 compute-0 conmon[115281]: conmon d93735f77f9d0d9e6870 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353.scope/container/memory.events
Jan 20 19:08:54 compute-0 podman[115265]: 2026-01-20 19:08:54.29547951 +0000 UTC m=+0.118600278 container died d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:08:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-469a4a26e354903567fca2b579961077b7c44a60f8046641068720d8481781ab-merged.mount: Deactivated successfully.
Jan 20 19:08:54 compute-0 podman[115265]: 2026-01-20 19:08:54.337881451 +0000 UTC m=+0.161002209 container remove d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_ride, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:08:54 compute-0 systemd[1]: libpod-conmon-d93735f77f9d0d9e6870fe3803531b023a9f19e9a8e472021de812abffb61353.scope: Deactivated successfully.
Jan 20 19:08:54 compute-0 podman[115303]: 2026-01-20 19:08:54.521969521 +0000 UTC m=+0.045835637 container create 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 19:08:54 compute-0 systemd[1]: Started libpod-conmon-0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073.scope.
Jan 20 19:08:54 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a0f42dba5bed70e8e069fefc130ed2dc0f7d7fa49b2bdc16fc69c60cd5dd38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a0f42dba5bed70e8e069fefc130ed2dc0f7d7fa49b2bdc16fc69c60cd5dd38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a0f42dba5bed70e8e069fefc130ed2dc0f7d7fa49b2bdc16fc69c60cd5dd38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:54 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14a0f42dba5bed70e8e069fefc130ed2dc0f7d7fa49b2bdc16fc69c60cd5dd38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:54 compute-0 podman[115303]: 2026-01-20 19:08:54.499017273 +0000 UTC m=+0.022883409 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:08:54 compute-0 podman[115303]: 2026-01-20 19:08:54.603732827 +0000 UTC m=+0.127598983 container init 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:08:54 compute-0 podman[115303]: 2026-01-20 19:08:54.61155698 +0000 UTC m=+0.135423096 container start 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:08:54 compute-0 podman[115303]: 2026-01-20 19:08:54.615928068 +0000 UTC m=+0.139794224 container attach 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:08:54 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 20 19:08:54 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 20 19:08:54 compute-0 ceph-mon[75120]: pgmap v304: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:54 compute-0 ceph-mon[75120]: 11.17 scrub starts
Jan 20 19:08:54 compute-0 ceph-mon[75120]: 11.17 scrub ok
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]: {
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:     "0": [
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:         {
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "devices": [
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "/dev/loop3"
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             ],
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_name": "ceph_lv0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_size": "21470642176",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "name": "ceph_lv0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "tags": {
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.cluster_name": "ceph",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.crush_device_class": "",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.encrypted": "0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.objectstore": "bluestore",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.osd_id": "0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.type": "block",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.vdo": "0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.with_tpm": "0"
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             },
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "type": "block",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "vg_name": "ceph_vg0"
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:         }
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:     ],
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:     "1": [
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:         {
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "devices": [
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "/dev/loop4"
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             ],
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_name": "ceph_lv1",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_size": "21470642176",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "name": "ceph_lv1",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "tags": {
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.cluster_name": "ceph",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.crush_device_class": "",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.encrypted": "0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.objectstore": "bluestore",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.osd_id": "1",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.type": "block",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.vdo": "0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.with_tpm": "0"
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             },
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "type": "block",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "vg_name": "ceph_vg1"
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:         }
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:     ],
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:     "2": [
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:         {
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "devices": [
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "/dev/loop5"
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             ],
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_name": "ceph_lv2",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_size": "21470642176",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "name": "ceph_lv2",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "tags": {
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.cluster_name": "ceph",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.crush_device_class": "",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.encrypted": "0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.objectstore": "bluestore",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.osd_id": "2",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.type": "block",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.vdo": "0",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:                 "ceph.with_tpm": "0"
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             },
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "type": "block",
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:             "vg_name": "ceph_vg2"
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:         }
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]:     ]
Jan 20 19:08:54 compute-0 fervent_bhabha[115320]: }
Jan 20 19:08:54 compute-0 systemd[1]: libpod-0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073.scope: Deactivated successfully.
Jan 20 19:08:54 compute-0 podman[115303]: 2026-01-20 19:08:54.919243972 +0000 UTC m=+0.443110078 container died 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:08:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-14a0f42dba5bed70e8e069fefc130ed2dc0f7d7fa49b2bdc16fc69c60cd5dd38-merged.mount: Deactivated successfully.
Jan 20 19:08:54 compute-0 podman[115303]: 2026-01-20 19:08:54.961084919 +0000 UTC m=+0.484951025 container remove 0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=fervent_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 19:08:54 compute-0 systemd[1]: libpod-conmon-0cea3cb2fa6d79546ca8f15d13d5a815216501b7868ac77acac02e9e8b868073.scope: Deactivated successfully.
Jan 20 19:08:55 compute-0 sudo[115226]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:55 compute-0 sudo[115341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:08:55 compute-0 sudo[115341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:55 compute-0 sudo[115341]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:55 compute-0 sudo[115366]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:08:55 compute-0 sudo[115366]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:55 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 20 19:08:55 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 20 19:08:55 compute-0 podman[115403]: 2026-01-20 19:08:55.410734308 +0000 UTC m=+0.044553235 container create d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:08:55 compute-0 systemd[1]: Started libpod-conmon-d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614.scope.
Jan 20 19:08:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:55 compute-0 podman[115403]: 2026-01-20 19:08:55.394193448 +0000 UTC m=+0.028012385 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:08:55 compute-0 podman[115403]: 2026-01-20 19:08:55.490956275 +0000 UTC m=+0.124775212 container init d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:08:55 compute-0 podman[115403]: 2026-01-20 19:08:55.498115902 +0000 UTC m=+0.131934829 container start d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:08:55 compute-0 podman[115403]: 2026-01-20 19:08:55.502122141 +0000 UTC m=+0.135941058 container attach d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 19:08:55 compute-0 trusting_lederberg[115419]: 167 167
Jan 20 19:08:55 compute-0 systemd[1]: libpod-d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614.scope: Deactivated successfully.
Jan 20 19:08:55 compute-0 podman[115403]: 2026-01-20 19:08:55.50568101 +0000 UTC m=+0.139499927 container died d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:08:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-c634c716127eff9cf60533083beab32003e71d56c1fe9525e5c4e5ff09126b0c-merged.mount: Deactivated successfully.
Jan 20 19:08:55 compute-0 podman[115403]: 2026-01-20 19:08:55.546675885 +0000 UTC m=+0.180494792 container remove d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=trusting_lederberg, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:08:55 compute-0 systemd[1]: libpod-conmon-d054ad2de984bb5ed03975e83eb9080fa8e8c349d83ec4d1c175e8036ba13614.scope: Deactivated successfully.
Jan 20 19:08:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:55 compute-0 podman[115444]: 2026-01-20 19:08:55.699202384 +0000 UTC m=+0.044801371 container create 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 19:08:55 compute-0 systemd[1]: Started libpod-conmon-3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412.scope.
Jan 20 19:08:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981acd86554aa7b7893992c63650d676ab974d6c38d47592aa4a1b7fcc35219d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981acd86554aa7b7893992c63650d676ab974d6c38d47592aa4a1b7fcc35219d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981acd86554aa7b7893992c63650d676ab974d6c38d47592aa4a1b7fcc35219d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:55 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/981acd86554aa7b7893992c63650d676ab974d6c38d47592aa4a1b7fcc35219d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:08:55 compute-0 ceph-mon[75120]: 4.14 scrub starts
Jan 20 19:08:55 compute-0 ceph-mon[75120]: 4.14 scrub ok
Jan 20 19:08:55 compute-0 podman[115444]: 2026-01-20 19:08:55.777815961 +0000 UTC m=+0.123414968 container init 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:08:55 compute-0 podman[115444]: 2026-01-20 19:08:55.68293574 +0000 UTC m=+0.028534747 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:08:55 compute-0 podman[115444]: 2026-01-20 19:08:55.785920942 +0000 UTC m=+0.131519929 container start 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:08:55 compute-0 podman[115444]: 2026-01-20 19:08:55.790110266 +0000 UTC m=+0.135709283 container attach 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:08:55 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 20 19:08:55 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 20 19:08:56 compute-0 lvm[115539]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:08:56 compute-0 lvm[115542]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:08:56 compute-0 lvm[115542]: VG ceph_vg1 finished
Jan 20 19:08:56 compute-0 lvm[115539]: VG ceph_vg0 finished
Jan 20 19:08:56 compute-0 lvm[115544]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:08:56 compute-0 lvm[115544]: VG ceph_vg2 finished
Jan 20 19:08:56 compute-0 sshd-session[115532]: Accepted publickey for zuul from 192.168.122.30 port 41544 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:08:56 compute-0 lvm[115546]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:08:56 compute-0 lvm[115546]: VG ceph_vg1 finished
Jan 20 19:08:56 compute-0 systemd-logind[797]: New session 40 of user zuul.
Jan 20 19:08:56 compute-0 systemd[1]: Started Session 40 of User zuul.
Jan 20 19:08:56 compute-0 sshd-session[115532]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:08:56 compute-0 youthful_kirch[115460]: {}
Jan 20 19:08:56 compute-0 systemd[1]: libpod-3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412.scope: Deactivated successfully.
Jan 20 19:08:56 compute-0 systemd[1]: libpod-3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412.scope: Consumed 1.412s CPU time.
Jan 20 19:08:56 compute-0 podman[115444]: 2026-01-20 19:08:56.646243734 +0000 UTC m=+0.991842741 container died 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 20 19:08:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-981acd86554aa7b7893992c63650d676ab974d6c38d47592aa4a1b7fcc35219d-merged.mount: Deactivated successfully.
Jan 20 19:08:56 compute-0 podman[115444]: 2026-01-20 19:08:56.700726523 +0000 UTC m=+1.046325500 container remove 3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_kirch, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:08:56 compute-0 systemd[1]: libpod-conmon-3602dd838f301d5f2b4107139ab2b6016e4206c30bb375afaca5f22cfe74d412.scope: Deactivated successfully.
Jan 20 19:08:56 compute-0 sudo[115366]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:56 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:08:56 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:08:56 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:08:56 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:08:56 compute-0 ceph-mon[75120]: pgmap v305: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:56 compute-0 ceph-mon[75120]: 9.17 scrub starts
Jan 20 19:08:56 compute-0 ceph-mon[75120]: 9.17 scrub ok
Jan 20 19:08:56 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:08:56 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:08:56 compute-0 sudo[115616]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:08:56 compute-0 sudo[115616]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:08:56 compute-0 sudo[115616]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:57 compute-0 python3.9[115738]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:08:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:58 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 20 19:08:58 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 20 19:08:58 compute-0 sudo[115892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwgmkqoushizpphgbwfcvuigtdrnshck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936137.9978685-28-33313408992549/AnsiballZ_file.py'
Jan 20 19:08:58 compute-0 sudo[115892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:58 compute-0 python3.9[115894]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:08:58 compute-0 sudo[115892]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:58 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 20 19:08:58 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 20 19:08:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:08:58 compute-0 ceph-mon[75120]: pgmap v306: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:58 compute-0 ceph-mon[75120]: 4.12 scrub starts
Jan 20 19:08:58 compute-0 ceph-mon[75120]: 4.12 scrub ok
Jan 20 19:08:58 compute-0 ceph-mon[75120]: 7.1b scrub starts
Jan 20 19:08:58 compute-0 ceph-mon[75120]: 7.1b scrub ok
Jan 20 19:08:59 compute-0 sudo[116067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmkrerkmbwqdrucwmaqahddtcuyzfzoe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936138.734793-36-256792590723402/AnsiballZ_stat.py'
Jan 20 19:08:59 compute-0 sudo[116067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:59 compute-0 python3.9[116069]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:08:59 compute-0 sudo[116067]: pam_unix(sudo:session): session closed for user root
Jan 20 19:08:59 compute-0 sudo[116145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgausazrlthpialooxanydvqjpsmwnka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936138.734793-36-256792590723402/AnsiballZ_file.py'
Jan 20 19:08:59 compute-0 sudo[116145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:08:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:08:59 compute-0 python3.9[116147]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.5ru5ie4z recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:08:59 compute-0 sudo[116145]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:00 compute-0 sudo[116297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqacmekheflxbmnbfwglixkgtzcpzvgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936140.0718956-56-145556682605643/AnsiballZ_stat.py'
Jan 20 19:09:00 compute-0 sudo[116297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:00 compute-0 python3.9[116299]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:00 compute-0 sudo[116297]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:00 compute-0 sudo[116375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snpczzorzukweugkuchmgvjkkidnjqxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936140.0718956-56-145556682605643/AnsiballZ_file.py'
Jan 20 19:09:00 compute-0 sudo[116375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:00 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 20 19:09:00 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 20 19:09:00 compute-0 ceph-mon[75120]: pgmap v307: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:00 compute-0 python3.9[116377]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.5ghv7o77 recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:00 compute-0 sudo[116375]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:01 compute-0 sudo[116527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlcxcaopcdalvxqrquacaqljjnwgrozk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936141.1056406-69-19329422630850/AnsiballZ_file.py'
Jan 20 19:09:01 compute-0 sudo[116527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:01 compute-0 python3.9[116529]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:09:01 compute-0 sudo[116527]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:01 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 20 19:09:01 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 20 19:09:01 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 20 19:09:01 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 20 19:09:01 compute-0 ceph-mon[75120]: 9.f scrub starts
Jan 20 19:09:01 compute-0 ceph-mon[75120]: 9.f scrub ok
Jan 20 19:09:01 compute-0 ceph-mon[75120]: 3.12 scrub starts
Jan 20 19:09:01 compute-0 ceph-mon[75120]: 3.12 scrub ok
Jan 20 19:09:02 compute-0 sudo[116679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hffdtnsndvvupxndcwedlbzemrjjibjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936141.8634205-77-176002647231415/AnsiballZ_stat.py'
Jan 20 19:09:02 compute-0 sudo[116679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:02 compute-0 python3.9[116681]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:02 compute-0 sudo[116679]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:02 compute-0 sudo[116757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zylqbcuphnbkndycxbgcjdmkuxyravtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936141.8634205-77-176002647231415/AnsiballZ_file.py'
Jan 20 19:09:02 compute-0 sudo[116757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:02 compute-0 python3.9[116759]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:09:02 compute-0 sudo[116757]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:02 compute-0 ceph-mon[75120]: pgmap v308: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:02 compute-0 ceph-mon[75120]: 9.c scrub starts
Jan 20 19:09:02 compute-0 ceph-mon[75120]: 9.c scrub ok
Jan 20 19:09:03 compute-0 sudo[116909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bnwhktvirbtanuzfajmabnvugassuqii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936142.8742085-77-258756450958450/AnsiballZ_stat.py'
Jan 20 19:09:03 compute-0 sudo[116909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:03 compute-0 python3.9[116911]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:03 compute-0 sudo[116909]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:03 compute-0 sudo[116987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phnpwclqoyiwfvjiasfyyeegjnwzjrmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936142.8742085-77-258756450958450/AnsiballZ_file.py'
Jan 20 19:09:03 compute-0 sudo[116987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:03 compute-0 python3.9[116989]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:09:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:03 compute-0 sudo[116987]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:03 compute-0 ceph-mon[75120]: pgmap v309: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:04 compute-0 sudo[117139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmmofiwvuodhcdtzecsrzbqjdalvnqxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936143.8707523-100-33345374002058/AnsiballZ_file.py'
Jan 20 19:09:04 compute-0 sudo[117139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:04 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 20 19:09:04 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 20 19:09:04 compute-0 python3.9[117141]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:04 compute-0 sudo[117139]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:04 compute-0 sudo[117291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xingpnuzpwbusqduhdnpewyqucrhtywg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936144.4765024-108-237906509529086/AnsiballZ_stat.py'
Jan 20 19:09:04 compute-0 sudo[117291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:04 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 20 19:09:04 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 20 19:09:04 compute-0 ceph-mon[75120]: 4.10 scrub starts
Jan 20 19:09:04 compute-0 ceph-mon[75120]: 4.10 scrub ok
Jan 20 19:09:04 compute-0 python3.9[117293]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:04 compute-0 sudo[117291]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:05 compute-0 sudo[117369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kieupncgeipksczlirdlngxptdvybcos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936144.4765024-108-237906509529086/AnsiballZ_file.py'
Jan 20 19:09:05 compute-0 sudo[117369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:05 compute-0 python3.9[117371]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:05 compute-0 sudo[117369]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:05 compute-0 sudo[117521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asiwhhxchlcxgxeehzyhrwdtkyaroafp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936145.4879093-120-187895668697577/AnsiballZ_stat.py'
Jan 20 19:09:05 compute-0 sudo[117521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:05 compute-0 ceph-mon[75120]: 9.7 scrub starts
Jan 20 19:09:05 compute-0 ceph-mon[75120]: 9.7 scrub ok
Jan 20 19:09:05 compute-0 ceph-mon[75120]: pgmap v310: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:05 compute-0 python3.9[117523]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:05 compute-0 sudo[117521]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:06 compute-0 sudo[117599]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmfdhkwspugslrpwppwofqgpvfmgqbwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936145.4879093-120-187895668697577/AnsiballZ_file.py'
Jan 20 19:09:06 compute-0 sudo[117599]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:06 compute-0 python3.9[117601]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:06 compute-0 sudo[117599]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:06 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 20 19:09:06 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 20 19:09:07 compute-0 sudo[117751]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efwpxvmzrvfzxkamussohjcrfdpowvcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936146.5464683-132-94220085649095/AnsiballZ_systemd.py'
Jan 20 19:09:07 compute-0 sudo[117751]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:07 compute-0 python3.9[117753]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:09:07 compute-0 systemd[1]: Reloading.
Jan 20 19:09:07 compute-0 systemd-sysv-generator[117778]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:09:07 compute-0 systemd-rc-local-generator[117772]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:09:07 compute-0 ceph-mon[75120]: 9.6 scrub starts
Jan 20 19:09:07 compute-0 ceph-mon[75120]: 9.6 scrub ok
Jan 20 19:09:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:07 compute-0 sudo[117751]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:08 compute-0 sudo[117940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqggbdfsihcasbjcikqrxeoqqqupohym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936147.9352608-140-194799801219680/AnsiballZ_stat.py'
Jan 20 19:09:08 compute-0 sudo[117940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:08 compute-0 python3.9[117942]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:08 compute-0 sudo[117940]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:08 compute-0 ceph-mon[75120]: pgmap v311: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:08 compute-0 sudo[118018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anxdvzrjwpwkkkelwymvhuijdhqveayg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936147.9352608-140-194799801219680/AnsiballZ_file.py'
Jan 20 19:09:08 compute-0 sudo[118018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:08 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 20 19:09:08 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 20 19:09:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:08 compute-0 python3.9[118020]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:08 compute-0 sudo[118018]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:08 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 20 19:09:08 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 20 19:09:09 compute-0 sudo[118170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwtlumrlpgfeocezujpgenjrexgmmewk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936148.9380002-152-117646147870774/AnsiballZ_stat.py'
Jan 20 19:09:09 compute-0 sudo[118170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:09 compute-0 python3.9[118172]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:09 compute-0 sudo[118170]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:09 compute-0 ceph-mon[75120]: 10.e scrub starts
Jan 20 19:09:09 compute-0 ceph-mon[75120]: 10.e scrub ok
Jan 20 19:09:09 compute-0 ceph-mon[75120]: 9.19 scrub starts
Jan 20 19:09:09 compute-0 ceph-mon[75120]: 9.19 scrub ok
Jan 20 19:09:09 compute-0 sudo[118248]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tnvxptcixkckmlzdwfnxpijkvfjnqfqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936148.9380002-152-117646147870774/AnsiballZ_file.py'
Jan 20 19:09:09 compute-0 sudo[118248]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:09 compute-0 python3.9[118250]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:09 compute-0 sudo[118248]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:10 compute-0 sudo[118400]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fkkdamxbexarrqkvzntmhmbzjblawjya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936149.9777532-164-46022865369487/AnsiballZ_systemd.py'
Jan 20 19:09:10 compute-0 sudo[118400]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:10 compute-0 python3.9[118402]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:09:10 compute-0 systemd[1]: Reloading.
Jan 20 19:09:10 compute-0 ceph-mon[75120]: pgmap v312: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:10 compute-0 systemd-rc-local-generator[118427]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:09:10 compute-0 systemd-sysv-generator[118430]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:09:10 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 20 19:09:10 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 20 19:09:10 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 19:09:10 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 19:09:10 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 19:09:10 compute-0 systemd[1]: Finished Create netns directory.
Jan 20 19:09:10 compute-0 sudo[118400]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:11 compute-0 ceph-mon[75120]: 10.d scrub starts
Jan 20 19:09:11 compute-0 ceph-mon[75120]: 10.d scrub ok
Jan 20 19:09:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:11 compute-0 python3.9[118593]: ansible-ansible.builtin.service_facts Invoked
Jan 20 19:09:11 compute-0 network[118610]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 19:09:11 compute-0 network[118611]: 'network-scripts' will be removed from distribution in near future.
Jan 20 19:09:11 compute-0 network[118612]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 19:09:12 compute-0 ceph-mon[75120]: pgmap v313: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:13 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 20 19:09:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:13 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 20 19:09:14 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 20 19:09:14 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 20 19:09:14 compute-0 ceph-mon[75120]: pgmap v314: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:14 compute-0 ceph-mon[75120]: 9.18 scrub starts
Jan 20 19:09:14 compute-0 ceph-mon[75120]: 9.18 scrub ok
Jan 20 19:09:14 compute-0 ceph-mon[75120]: 10.14 scrub starts
Jan 20 19:09:14 compute-0 ceph-mon[75120]: 10.14 scrub ok
Jan 20 19:09:14 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 20 19:09:14 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 20 19:09:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:15 compute-0 ceph-mon[75120]: 9.13 scrub starts
Jan 20 19:09:15 compute-0 ceph-mon[75120]: 9.13 scrub ok
Jan 20 19:09:16 compute-0 sudo[118872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkdqxqpzbnmtzqskjfsqbgvnsdwrcowp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936155.854253-190-4052741747667/AnsiballZ_stat.py'
Jan 20 19:09:16 compute-0 sudo[118872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:16 compute-0 python3.9[118874]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:16 compute-0 sudo[118872]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:16 compute-0 sudo[118950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orevhtuvokxqlvkspokxpztzhigdoiek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936155.854253-190-4052741747667/AnsiballZ_file.py'
Jan 20 19:09:16 compute-0 sudo[118950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:16 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 20 19:09:16 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 20 19:09:16 compute-0 ceph-mon[75120]: pgmap v315: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:16 compute-0 python3.9[118952]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:16 compute-0 sudo[118950]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:17 compute-0 sudo[119102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdouexltmxcfwfnpciossofwqddjnbta ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936157.049027-203-55167773359605/AnsiballZ_file.py'
Jan 20 19:09:17 compute-0 sudo[119102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:17 compute-0 python3.9[119104]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:17 compute-0 sudo[119102]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:17 compute-0 ceph-mon[75120]: 10.15 scrub starts
Jan 20 19:09:17 compute-0 ceph-mon[75120]: 10.15 scrub ok
Jan 20 19:09:17 compute-0 sudo[119254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spddnhlaeqacqpzgoqplqjeaatdihebz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936157.6592226-211-176045290404733/AnsiballZ_stat.py'
Jan 20 19:09:17 compute-0 sudo[119254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:18 compute-0 python3.9[119256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:18 compute-0 sudo[119254]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:18 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 20 19:09:18 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 20 19:09:18 compute-0 sudo[119332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-camuqsatmmsdkavczivpjwmklzprcwna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936157.6592226-211-176045290404733/AnsiballZ_file.py'
Jan 20 19:09:18 compute-0 sudo[119332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:18 compute-0 python3.9[119334]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:18 compute-0 sudo[119332]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:18 compute-0 ceph-mon[75120]: pgmap v316: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:18 compute-0 ceph-mon[75120]: 10.12 scrub starts
Jan 20 19:09:18 compute-0 ceph-mon[75120]: 10.12 scrub ok
Jan 20 19:09:19 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 20 19:09:19 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 20 19:09:19 compute-0 sudo[119484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmzevnllsswkwjlebgnlzkcpeoahukgb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936158.7645268-226-277816523758525/AnsiballZ_timezone.py'
Jan 20 19:09:19 compute-0 sudo[119484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:19 compute-0 python3.9[119486]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 20 19:09:19 compute-0 systemd[1]: Starting Time & Date Service...
Jan 20 19:09:19 compute-0 systemd[1]: Started Time & Date Service.
Jan 20 19:09:19 compute-0 sudo[119484]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:19 compute-0 ceph-mon[75120]: 6.2 scrub starts
Jan 20 19:09:19 compute-0 ceph-mon[75120]: 6.2 scrub ok
Jan 20 19:09:19 compute-0 sudo[119640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trejzyxlcrnxnhneakrfwrfkchticyyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936159.7374163-235-122074600258679/AnsiballZ_file.py'
Jan 20 19:09:19 compute-0 sudo[119640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:20 compute-0 python3.9[119642]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:20 compute-0 sudo[119640]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:20 compute-0 sudo[119792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhbszqktflriuxgdvazxrfazebbohuoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936160.3352005-243-185504756518854/AnsiballZ_stat.py'
Jan 20 19:09:20 compute-0 sudo[119792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:20 compute-0 python3.9[119794]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:20 compute-0 ceph-mon[75120]: pgmap v317: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:20 compute-0 sudo[119792]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:20 compute-0 sudo[119870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdbgpfxkrcqytykmevxmynockbxqzedu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936160.3352005-243-185504756518854/AnsiballZ_file.py'
Jan 20 19:09:20 compute-0 sudo[119870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:21 compute-0 python3.9[119872]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:21 compute-0 sudo[119870]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:21 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 20 19:09:21 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 20 19:09:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:22 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 20 19:09:22 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 20 19:09:22 compute-0 ceph-mon[75120]: 6.6 scrub starts
Jan 20 19:09:22 compute-0 ceph-mon[75120]: 6.6 scrub ok
Jan 20 19:09:22 compute-0 ceph-mon[75120]: pgmap v318: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:22 compute-0 sudo[120022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oignhgfmuedxapqiwwwtmgmjzchyajfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936161.3097155-255-145951229924982/AnsiballZ_stat.py'
Jan 20 19:09:22 compute-0 sudo[120022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:22 compute-0 python3.9[120024]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:22 compute-0 sudo[120022]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:22 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 20 19:09:22 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 20 19:09:22 compute-0 sudo[120100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbcokkeiokgkbpikovdetutbnhvtsprn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936161.3097155-255-145951229924982/AnsiballZ_file.py'
Jan 20 19:09:22 compute-0 sudo[120100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:22 compute-0 python3.9[120102]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.4gqboswt recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:22 compute-0 sudo[120100]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:23 compute-0 ceph-mon[75120]: 6.4 scrub starts
Jan 20 19:09:23 compute-0 ceph-mon[75120]: 6.4 scrub ok
Jan 20 19:09:23 compute-0 ceph-mon[75120]: 8.6 scrub starts
Jan 20 19:09:23 compute-0 ceph-mon[75120]: 8.6 scrub ok
Jan 20 19:09:23 compute-0 sudo[120252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzspkgwweagjglbjfvzepxhpzgviwkgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936163.1125607-267-238379394296684/AnsiballZ_stat.py'
Jan 20 19:09:23 compute-0 sudo[120252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:23 compute-0 python3.9[120254]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:23 compute-0 sudo[120252]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:23 compute-0 sudo[120330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xujqbltiztmwjedcczqispazfqydmaix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936163.1125607-267-238379394296684/AnsiballZ_file.py'
Jan 20 19:09:23 compute-0 sudo[120330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:23 compute-0 python3.9[120332]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:23 compute-0 sudo[120330]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:24 compute-0 ceph-mon[75120]: pgmap v319: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:24 compute-0 sudo[120482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkohuuggrjuycqhojungqnvrajvdrniu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936164.1348448-280-108557535669545/AnsiballZ_command.py'
Jan 20 19:09:24 compute-0 sudo[120482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:24 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 20 19:09:24 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 20 19:09:24 compute-0 python3.9[120484]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:09:24 compute-0 sudo[120482]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:25 compute-0 ceph-mon[75120]: 10.9 scrub starts
Jan 20 19:09:25 compute-0 ceph-mon[75120]: 10.9 scrub ok
Jan 20 19:09:25 compute-0 sudo[120635]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnxbftsdtozsfdhreaowembmayczaduq ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768936164.9261386-288-126949960505753/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 19:09:25 compute-0 sudo[120635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:25 compute-0 python3[120637]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 19:09:25 compute-0 sudo[120635]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:25 compute-0 sudo[120787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zsvxnfwojmnnysguxnljofolxvpmfnlv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936165.6710742-296-208604909949069/AnsiballZ_stat.py'
Jan 20 19:09:25 compute-0 sudo[120787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:26 compute-0 python3.9[120789]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:26 compute-0 sudo[120787]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:26 compute-0 ceph-mon[75120]: pgmap v320: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:26 compute-0 sudo[120865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjgecqbbqgpqjgjlhakaqlfhgsmrhdok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936165.6710742-296-208604909949069/AnsiballZ_file.py'
Jan 20 19:09:26 compute-0 sudo[120865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:26 compute-0 python3.9[120867]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:26 compute-0 sudo[120865]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:26 compute-0 sudo[121017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzocaeimpbenjnmmklghpripdwmjutws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936166.659998-308-272978667455818/AnsiballZ_stat.py'
Jan 20 19:09:26 compute-0 sudo[121017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:27 compute-0 python3.9[121019]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:27 compute-0 sudo[121017]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:27 compute-0 sudo[121142]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiefzxzpoxufcfxootzugkuqclqkdpwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936166.659998-308-272978667455818/AnsiballZ_copy.py'
Jan 20 19:09:27 compute-0 sudo[121142]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:27 compute-0 python3.9[121144]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936166.659998-308-272978667455818/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:27 compute-0 sudo[121142]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 20 19:09:28 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 20 19:09:28 compute-0 sudo[121294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvyfjkntgxqmqqwqfpjcarfcvqlwrhsv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936167.99538-323-64543937077640/AnsiballZ_stat.py'
Jan 20 19:09:28 compute-0 sudo[121294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:28 compute-0 python3.9[121296]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:28 compute-0 sudo[121294]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 20 19:09:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 20 19:09:28 compute-0 sudo[121372]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djspqmkohozreytyozdmzmdvwwnoqqkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936167.99538-323-64543937077640/AnsiballZ_file.py'
Jan 20 19:09:28 compute-0 sudo[121372]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:28 compute-0 ceph-mon[75120]: pgmap v321: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:28 compute-0 ceph-mon[75120]: 6.d scrub starts
Jan 20 19:09:28 compute-0 ceph-mon[75120]: 6.d scrub ok
Jan 20 19:09:28 compute-0 python3.9[121374]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:28 compute-0 sudo[121372]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:29 compute-0 sudo[121525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iealnnayvlesvttwxoxvhivnnixrqbke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936169.079784-335-222366226594298/AnsiballZ_stat.py'
Jan 20 19:09:29 compute-0 sudo[121525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:29 compute-0 python3.9[121527]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:29 compute-0 sudo[121525]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:29 compute-0 ceph-mon[75120]: 8.f scrub starts
Jan 20 19:09:29 compute-0 ceph-mon[75120]: 8.f scrub ok
Jan 20 19:09:29 compute-0 sudo[121603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vklqtihftrtqqolotqlazejdnrnismmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936169.079784-335-222366226594298/AnsiballZ_file.py'
Jan 20 19:09:29 compute-0 sudo[121603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:29 compute-0 python3.9[121605]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:30 compute-0 sudo[121603]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 20 19:09:30 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 20 19:09:30 compute-0 sudo[121755]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcorrmgjodvlkvlgedizznwmfpesmhnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936170.131817-347-143280726945951/AnsiballZ_stat.py'
Jan 20 19:09:30 compute-0 sudo[121755]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:30 compute-0 python3.9[121757]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:30 compute-0 sudo[121755]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:30 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 20 19:09:30 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 20 19:09:30 compute-0 ceph-mon[75120]: pgmap v322: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:30 compute-0 ceph-mon[75120]: 6.e scrub starts
Jan 20 19:09:30 compute-0 ceph-mon[75120]: 6.e scrub ok
Jan 20 19:09:30 compute-0 sudo[121833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnfvztnfabndbvexfvkxcwpvlklpweis ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936170.131817-347-143280726945951/AnsiballZ_file.py'
Jan 20 19:09:30 compute-0 sudo[121833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:31 compute-0 python3.9[121835]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:31 compute-0 sudo[121833]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:09:31
Jan 20 19:09:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:09:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:09:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['images', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'backups', 'default.rgw.control', 'volumes']
Jan 20 19:09:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:09:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:31 compute-0 sudo[121985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zptulekkcclsayxkasgswgdslqrwqilt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936171.3173354-360-73223025199084/AnsiballZ_command.py'
Jan 20 19:09:31 compute-0 sudo[121985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:31 compute-0 ceph-mon[75120]: 6.a scrub starts
Jan 20 19:09:31 compute-0 ceph-mon[75120]: 6.a scrub ok
Jan 20 19:09:31 compute-0 python3.9[121987]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:09:31 compute-0 sudo[121985]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:32 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 20 19:09:32 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 20 19:09:32 compute-0 sudo[122140]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnwvcibulklazgfyfwykwokeneuyoupq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936172.1261625-368-101811588193428/AnsiballZ_blockinfile.py'
Jan 20 19:09:32 compute-0 sudo[122140]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:32 compute-0 ceph-mon[75120]: pgmap v323: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:32 compute-0 ceph-mon[75120]: 6.1 scrub starts
Jan 20 19:09:32 compute-0 ceph-mon[75120]: 6.1 scrub ok
Jan 20 19:09:32 compute-0 python3.9[122142]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:32 compute-0 sudo[122140]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:33 compute-0 sudo[122292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkvwyydjeqvvarpuaklzwteulbetnutk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936172.9651225-377-71029839134250/AnsiballZ_file.py'
Jan 20 19:09:33 compute-0 sudo[122292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:33 compute-0 python3.9[122294]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:33 compute-0 sudo[122292]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:33 compute-0 sudo[122444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aszicdexdlnlmmxtshrhxznejwwdjqqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936173.5530257-377-141445953744357/AnsiballZ_file.py'
Jan 20 19:09:33 compute-0 sudo[122444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:34 compute-0 python3.9[122446]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:34 compute-0 sudo[122444]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 20 19:09:34 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:09:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:09:34 compute-0 sudo[122598]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpkkkxamnxtqbzinmgtbfbacnqcoaqhb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936174.2130296-392-117714563172352/AnsiballZ_mount.py'
Jan 20 19:09:34 compute-0 sudo[122598]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:34 compute-0 sshd-session[122492]: Connection closed by authenticating user root 45.148.10.240 port 44174 [preauth]
Jan 20 19:09:34 compute-0 ceph-mon[75120]: pgmap v324: 305 pgs: 305 active+clean; 463 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:34 compute-0 ceph-mon[75120]: 6.c scrub starts
Jan 20 19:09:34 compute-0 ceph-mon[75120]: 6.c scrub ok
Jan 20 19:09:34 compute-0 python3.9[122600]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 19:09:34 compute-0 sudo[122598]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:35 compute-0 sudo[122750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uydocvdfckbjzqbcpbfxohxepotjbjly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936175.0207849-392-223717912239257/AnsiballZ_mount.py'
Jan 20 19:09:35 compute-0 sudo[122750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:35 compute-0 python3.9[122752]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 20 19:09:35 compute-0 sudo[122750]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:35 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 20 19:09:35 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 20 19:09:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:35 compute-0 sshd-session[115548]: Connection closed by 192.168.122.30 port 41544
Jan 20 19:09:35 compute-0 sshd-session[115532]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:09:35 compute-0 systemd[1]: session-40.scope: Deactivated successfully.
Jan 20 19:09:35 compute-0 systemd[1]: session-40.scope: Consumed 27.993s CPU time.
Jan 20 19:09:35 compute-0 systemd-logind[797]: Session 40 logged out. Waiting for processes to exit.
Jan 20 19:09:35 compute-0 systemd-logind[797]: Removed session 40.
Jan 20 19:09:36 compute-0 ceph-mon[75120]: 6.5 scrub starts
Jan 20 19:09:36 compute-0 ceph-mon[75120]: 6.5 scrub ok
Jan 20 19:09:36 compute-0 ceph-mon[75120]: pgmap v325: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:38 compute-0 ceph-mon[75120]: pgmap v326: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 20 19:09:39 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 20 19:09:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:39 compute-0 ceph-mon[75120]: 6.b scrub starts
Jan 20 19:09:39 compute-0 ceph-mon[75120]: 6.b scrub ok
Jan 20 19:09:40 compute-0 ceph-mon[75120]: pgmap v327: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:41 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 20 19:09:41 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 20 19:09:41 compute-0 sshd-session[122778]: Accepted publickey for zuul from 192.168.122.30 port 55856 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:09:41 compute-0 systemd-logind[797]: New session 41 of user zuul.
Jan 20 19:09:41 compute-0 systemd[1]: Started Session 41 of User zuul.
Jan 20 19:09:41 compute-0 sshd-session[122778]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:09:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:41 compute-0 ceph-mon[75120]: 9.15 scrub starts
Jan 20 19:09:41 compute-0 ceph-mon[75120]: 9.15 scrub ok
Jan 20 19:09:41 compute-0 sudo[122931]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hydafdprswufqxiyexhqhvlfybvduzod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936181.3208845-16-121298153427557/AnsiballZ_tempfile.py'
Jan 20 19:09:41 compute-0 sudo[122931]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:42 compute-0 python3.9[122933]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 20 19:09:42 compute-0 sudo[122931]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:42 compute-0 sudo[123083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymhilovsughbxmlhdiwlwufvmgapflym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936182.1692936-28-157661710523617/AnsiballZ_stat.py'
Jan 20 19:09:42 compute-0 sudo[123083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:42 compute-0 python3.9[123085]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:09:42 compute-0 sudo[123083]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:42 compute-0 ceph-mon[75120]: pgmap v328: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:43 compute-0 sudo[123237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jggasvuaotwhmlczvedhrgfardbgljxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936182.8808312-36-68802943245509/AnsiballZ_slurp.py'
Jan 20 19:09:43 compute-0 sudo[123237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:43 compute-0 python3.9[123239]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Jan 20 19:09:43 compute-0 sudo[123237]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:43 compute-0 sudo[123389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnajouzgnawdicstibcbyuukdlncwlkp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936183.642505-44-41799903480127/AnsiballZ_stat.py'
Jan 20 19:09:43 compute-0 sudo[123389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:44 compute-0 python3.9[123391]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.01etno5f follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:09:44 compute-0 sudo[123389]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:44 compute-0 ceph-mon[75120]: pgmap v329: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:09:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:09:44 compute-0 sudo[123514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sersxyvvpzfmxswktmttqeotytcvpniv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936183.642505-44-41799903480127/AnsiballZ_copy.py'
Jan 20 19:09:44 compute-0 sudo[123514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:44 compute-0 python3.9[123516]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.01etno5f mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936183.642505-44-41799903480127/.source.01etno5f _original_basename=.7ojkxo3k follow=False checksum=e99902dd0defb60b71293d8fd634ed68435b6950 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:44 compute-0 sudo[123514]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 20 19:09:45 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 20 19:09:45 compute-0 ceph-mon[75120]: 9.14 scrub starts
Jan 20 19:09:45 compute-0 ceph-mon[75120]: 9.14 scrub ok
Jan 20 19:09:45 compute-0 sudo[123666]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ycyipfolxpftcfeqckwsiupenrxnsmvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936184.8868885-59-19567981526970/AnsiballZ_setup.py'
Jan 20 19:09:45 compute-0 sudo[123666]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:45 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 20 19:09:45 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 20 19:09:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:45 compute-0 python3.9[123668]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:09:45 compute-0 sudo[123666]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:46 compute-0 sudo[123818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqozwqkfioxlzuduxgirnruguvbzwxeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936185.9061046-68-270067720675318/AnsiballZ_blockinfile.py'
Jan 20 19:09:46 compute-0 sudo[123818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:46 compute-0 ceph-mon[75120]: 6.9 scrub starts
Jan 20 19:09:46 compute-0 ceph-mon[75120]: 6.9 scrub ok
Jan 20 19:09:46 compute-0 ceph-mon[75120]: pgmap v330: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:46 compute-0 python3.9[123820]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCz3b07HV3uJtYZS5SXFV7UOV5We+VhL7E4MInSTY31YDxLu74UtLEKRyupRLnE9d5cVG8e5JHiBt72dhLY2VbhACUUzWUR1aTUO/jAfEzM97GQgzgl5skY63LeYydonq3csjRREkj9YaliQuWdLTocUhfB/0t0HX525BkLTzTfdhjhDOY6NzeJUhZjMKy9uM/RZvITLdPgnYTjcLN12hAtWjUGKvAcUEfWpRW0efbUgaPSuNuRxZWXNuusp0UBopS1fv5P4Ea0VhwUmNZ0IJC3eljfUuHXRdQr6A4px/e8yVSwUILaYNL6ettCVX8HNvIxk6xmT5clWgr+Vibu+qnmAoOdOqoRYdZgH/26kU5ZMOYv8wpa/TUoXbD1ClrmNUQNjD4kSFXQtI1uhLxuNYTzf4ftLLy92oo3ENBg4Oph0Hw00CUPNDcsAgD65KYg8/Frjms4h8AUjYrV2ktrqAPVEvcItbD5e7/cAcF1AnB9aHpNzgUo1iUbMmXN2/I/fQ0=
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIM5Jhg8QlHJt93+bopoKxGN+UwIsXQojyFhcp0nCuLCA
                                             compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCNoSkRzTUMXF81nHL5zY2fe7DfBkbvi2MFoFs1WurMuV9pkgr/kpqf2yHrz5D04ncV4FFj7hs+/ZPi7NjXPcIw=
                                              create=True mode=0644 path=/tmp/ansible.01etno5f state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:46 compute-0 sudo[123818]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:47 compute-0 sudo[123970]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plznckxeekbignnxriwovxtsniaqtxtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936186.6514978-76-155560954666897/AnsiballZ_command.py'
Jan 20 19:09:47 compute-0 sudo[123970]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:47 compute-0 python3.9[123972]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.01etno5f' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:09:47 compute-0 sudo[123970]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:47 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 20 19:09:47 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 20 19:09:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:47 compute-0 sudo[124124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlyoseuesezqzkilcywazhzaokqenwdk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936187.450791-84-269034960017260/AnsiballZ_file.py'
Jan 20 19:09:47 compute-0 sudo[124124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:48 compute-0 python3.9[124126]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.01etno5f state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:48 compute-0 sudo[124124]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:48 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 20 19:09:48 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 20 19:09:48 compute-0 sshd-session[122781]: Connection closed by 192.168.122.30 port 55856
Jan 20 19:09:48 compute-0 sshd-session[122778]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:09:48 compute-0 systemd-logind[797]: Session 41 logged out. Waiting for processes to exit.
Jan 20 19:09:48 compute-0 systemd[1]: session-41.scope: Deactivated successfully.
Jan 20 19:09:48 compute-0 systemd[1]: session-41.scope: Consumed 4.832s CPU time.
Jan 20 19:09:48 compute-0 systemd-logind[797]: Removed session 41.
Jan 20 19:09:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:48 compute-0 ceph-mon[75120]: 6.7 scrub starts
Jan 20 19:09:48 compute-0 ceph-mon[75120]: 6.7 scrub ok
Jan 20 19:09:48 compute-0 ceph-mon[75120]: pgmap v331: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:49 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 20 19:09:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:49 compute-0 ceph-mon[75120]: 6.3 scrub starts
Jan 20 19:09:49 compute-0 ceph-mon[75120]: 6.3 scrub ok
Jan 20 19:09:50 compute-0 ceph-mon[75120]: pgmap v332: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:52 compute-0 ceph-mon[75120]: pgmap v333: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:53 compute-0 sshd-session[124153]: Accepted publickey for zuul from 192.168.122.30 port 39124 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:09:53 compute-0 systemd-logind[797]: New session 42 of user zuul.
Jan 20 19:09:53 compute-0 systemd[1]: Started Session 42 of User zuul.
Jan 20 19:09:53 compute-0 sshd-session[124153]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:09:54 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 20 19:09:54 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 20 19:09:54 compute-0 ceph-mon[75120]: pgmap v334: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:55 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 20 19:09:55 compute-0 python3.9[124306]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:09:55 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 20 19:09:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:55 compute-0 ceph-mon[75120]: 6.0 scrub starts
Jan 20 19:09:55 compute-0 ceph-mon[75120]: 6.0 scrub ok
Jan 20 19:09:55 compute-0 ceph-mon[75120]: 9.10 scrub starts
Jan 20 19:09:55 compute-0 ceph-mon[75120]: 9.10 scrub ok
Jan 20 19:09:55 compute-0 sudo[124460]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kleynvdsdlyxfkpzhpkruthttkwltoee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936195.3856206-27-279717407967965/AnsiballZ_systemd.py'
Jan 20 19:09:55 compute-0 sudo[124460]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:56 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 20 19:09:56 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 20 19:09:56 compute-0 python3.9[124462]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 19:09:56 compute-0 sudo[124460]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:56 compute-0 sudo[124614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrqrglbmfjvfympcjebhaapaizjgqixu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936196.4885504-35-170178412447296/AnsiballZ_systemd.py'
Jan 20 19:09:56 compute-0 sudo[124614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:56 compute-0 ceph-mon[75120]: pgmap v335: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:56 compute-0 ceph-mon[75120]: 9.12 scrub starts
Jan 20 19:09:56 compute-0 ceph-mon[75120]: 9.12 scrub ok
Jan 20 19:09:56 compute-0 sudo[124617]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:09:56 compute-0 sudo[124617]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:56 compute-0 sudo[124617]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:56 compute-0 sudo[124642]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:09:56 compute-0 sudo[124642]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:57 compute-0 python3.9[124616]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:09:57 compute-0 sudo[124614]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:57 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 20 19:09:57 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 20 19:09:57 compute-0 sudo[124642]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:09:57 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:09:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:09:57 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:09:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:09:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:57 compute-0 sudo[124849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spulmhlqwprdmkvzladljfjzkekwhbxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936197.3071578-44-58999551550691/AnsiballZ_command.py'
Jan 20 19:09:57 compute-0 sudo[124849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:57 compute-0 python3.9[124851]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:09:57 compute-0 sudo[124849]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:58 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:09:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:09:58 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:09:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:09:58 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:09:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:09:58 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:09:58 compute-0 sudo[124929]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:09:58 compute-0 sudo[124929]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:58 compute-0 sudo[124929]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:58 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 20 19:09:58 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:09:58 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:09:58 compute-0 ceph-mon[75120]: pgmap v336: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:58 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 20 19:09:58 compute-0 sudo[124977]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:09:58 compute-0 sudo[124977]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:58 compute-0 sudo[125052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyvzwaaedxhhsrvrehzbkpobdwfdfcws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936198.127736-52-66588326940225/AnsiballZ_stat.py'
Jan 20 19:09:58 compute-0 sudo[125052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:09:58 compute-0 python3.9[125054]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:09:58 compute-0 sudo[125052]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:58 compute-0 podman[125067]: 2026-01-20 19:09:58.80208183 +0000 UTC m=+0.050001549 container create 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle)
Jan 20 19:09:58 compute-0 systemd[1]: Started libpod-conmon-1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4.scope.
Jan 20 19:09:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:09:58 compute-0 podman[125067]: 2026-01-20 19:09:58.777610864 +0000 UTC m=+0.025530613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:09:58 compute-0 podman[125067]: 2026-01-20 19:09:58.888043117 +0000 UTC m=+0.135962856 container init 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:09:58 compute-0 podman[125067]: 2026-01-20 19:09:58.898697681 +0000 UTC m=+0.146617400 container start 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:09:58 compute-0 podman[125067]: 2026-01-20 19:09:58.902731971 +0000 UTC m=+0.150651710 container attach 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 19:09:58 compute-0 vigilant_kalam[125083]: 167 167
Jan 20 19:09:58 compute-0 systemd[1]: libpod-1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4.scope: Deactivated successfully.
Jan 20 19:09:58 compute-0 podman[125067]: 2026-01-20 19:09:58.906847523 +0000 UTC m=+0.154767252 container died 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:09:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f28e1f526127f0c632dd6a1ea6629485740b65235f306b0377de782ddaf7a89-merged.mount: Deactivated successfully.
Jan 20 19:09:58 compute-0 podman[125067]: 2026-01-20 19:09:58.947182581 +0000 UTC m=+0.195102300 container remove 1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigilant_kalam, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:09:58 compute-0 systemd[1]: libpod-conmon-1b65a080912ca0ad37e067dcdb063684192750c41a7bb20755e4976704c434d4.scope: Deactivated successfully.
Jan 20 19:09:59 compute-0 podman[125172]: 2026-01-20 19:09:59.097427739 +0000 UTC m=+0.043261541 container create c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:09:59 compute-0 systemd[1]: Started libpod-conmon-c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def.scope.
Jan 20 19:09:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:09:59 compute-0 podman[125172]: 2026-01-20 19:09:59.165779201 +0000 UTC m=+0.111613023 container init c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:09:59 compute-0 podman[125172]: 2026-01-20 19:09:59.077182958 +0000 UTC m=+0.023016810 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:09:59 compute-0 podman[125172]: 2026-01-20 19:09:59.17824505 +0000 UTC m=+0.124078852 container start c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:09:59 compute-0 podman[125172]: 2026-01-20 19:09:59.181984052 +0000 UTC m=+0.127817874 container attach c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:09:59 compute-0 sudo[125282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txbmmiwdemsxdqnbxjbfumgezvgnimtm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936199.0075219-61-47891077265358/AnsiballZ_file.py'
Jan 20 19:09:59 compute-0 sudo[125282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:09:59 compute-0 ceph-mon[75120]: 9.11 scrub starts
Jan 20 19:09:59 compute-0 ceph-mon[75120]: 9.11 scrub ok
Jan 20 19:09:59 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:09:59 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:09:59 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:09:59 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:09:59 compute-0 ceph-mon[75120]: 9.b scrub starts
Jan 20 19:09:59 compute-0 ceph-mon[75120]: 9.b scrub ok
Jan 20 19:09:59 compute-0 happy_brown[125200]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:09:59 compute-0 happy_brown[125200]: --> All data devices are unavailable
Jan 20 19:09:59 compute-0 systemd[1]: libpod-c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def.scope: Deactivated successfully.
Jan 20 19:09:59 compute-0 podman[125172]: 2026-01-20 19:09:59.630296766 +0000 UTC m=+0.576130568 container died c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:09:59 compute-0 python3.9[125285]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:09:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-be8f4142d9ea1c43e0ee30833786f5c8ed7514e826fde8c81c8cf810275e5535-merged.mount: Deactivated successfully.
Jan 20 19:09:59 compute-0 sudo[125282]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:59 compute-0 podman[125172]: 2026-01-20 19:09:59.682170071 +0000 UTC m=+0.628003893 container remove c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=happy_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 20 19:09:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:09:59 compute-0 systemd[1]: libpod-conmon-c430165f8433bb84e85015c2f7ac41d7c462b351a8e5d6a36a5f88573fd12def.scope: Deactivated successfully.
Jan 20 19:09:59 compute-0 sudo[124977]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:59 compute-0 sudo[125330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:09:59 compute-0 sudo[125330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:59 compute-0 sudo[125330]: pam_unix(sudo:session): session closed for user root
Jan 20 19:09:59 compute-0 sudo[125358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:09:59 compute-0 sudo[125358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:09:59 compute-0 sshd-session[124156]: Connection closed by 192.168.122.30 port 39124
Jan 20 19:09:59 compute-0 sshd-session[124153]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:09:59 compute-0 systemd[1]: session-42.scope: Deactivated successfully.
Jan 20 19:09:59 compute-0 systemd[1]: session-42.scope: Consumed 4.067s CPU time.
Jan 20 19:09:59 compute-0 systemd-logind[797]: Session 42 logged out. Waiting for processes to exit.
Jan 20 19:10:00 compute-0 systemd-logind[797]: Removed session 42.
Jan 20 19:10:00 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 20 19:10:00 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 20 19:10:00 compute-0 podman[125395]: 2026-01-20 19:10:00.146799689 +0000 UTC m=+0.042692847 container create fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:10:00 compute-0 systemd[1]: Started libpod-conmon-fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81.scope.
Jan 20 19:10:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:10:00 compute-0 podman[125395]: 2026-01-20 19:10:00.130109276 +0000 UTC m=+0.026002464 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:10:00 compute-0 podman[125395]: 2026-01-20 19:10:00.231098796 +0000 UTC m=+0.126991974 container init fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:10:00 compute-0 podman[125395]: 2026-01-20 19:10:00.239624567 +0000 UTC m=+0.135517725 container start fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 19:10:00 compute-0 podman[125395]: 2026-01-20 19:10:00.243353828 +0000 UTC m=+0.139247006 container attach fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:10:00 compute-0 adoring_aryabhata[125411]: 167 167
Jan 20 19:10:00 compute-0 systemd[1]: libpod-fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81.scope: Deactivated successfully.
Jan 20 19:10:00 compute-0 conmon[125411]: conmon fdb67046f0b76a2fe36d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81.scope/container/memory.events
Jan 20 19:10:00 compute-0 podman[125395]: 2026-01-20 19:10:00.247079151 +0000 UTC m=+0.142972309 container died fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-0503282bc40603d8e765c9786ee821937a5e99bf01cf432e1538a7e0f9df7247-merged.mount: Deactivated successfully.
Jan 20 19:10:00 compute-0 podman[125395]: 2026-01-20 19:10:00.287696726 +0000 UTC m=+0.183589884 container remove fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=adoring_aryabhata, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 20 19:10:00 compute-0 systemd[1]: libpod-conmon-fdb67046f0b76a2fe36d726174e4844a8dd41c1deb073fcfcb0e5ef04129bc81.scope: Deactivated successfully.
Jan 20 19:10:00 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 20 19:10:00 compute-0 podman[125435]: 2026-01-20 19:10:00.460263956 +0000 UTC m=+0.052298834 container create 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:10:00 compute-0 ceph-mon[75120]: pgmap v337: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:00 compute-0 ceph-mon[75120]: 9.2 scrub starts
Jan 20 19:10:00 compute-0 ceph-mon[75120]: 9.2 scrub ok
Jan 20 19:10:00 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 20 19:10:00 compute-0 systemd[1]: Started libpod-conmon-67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6.scope.
Jan 20 19:10:00 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa804a0693ae7689f9abb9a2909dea1016681f6fc5011c9229ecb6f48929f699/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa804a0693ae7689f9abb9a2909dea1016681f6fc5011c9229ecb6f48929f699/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa804a0693ae7689f9abb9a2909dea1016681f6fc5011c9229ecb6f48929f699/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:00 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa804a0693ae7689f9abb9a2909dea1016681f6fc5011c9229ecb6f48929f699/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:00 compute-0 podman[125435]: 2026-01-20 19:10:00.435694529 +0000 UTC m=+0.027729437 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:10:00 compute-0 podman[125435]: 2026-01-20 19:10:00.532990226 +0000 UTC m=+0.125025174 container init 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:10:00 compute-0 podman[125435]: 2026-01-20 19:10:00.538444582 +0000 UTC m=+0.130479480 container start 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:10:00 compute-0 podman[125435]: 2026-01-20 19:10:00.542582384 +0000 UTC m=+0.134617282 container attach 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20251030)
Jan 20 19:10:00 compute-0 angry_lederberg[125452]: {
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:     "0": [
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:         {
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "devices": [
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "/dev/loop3"
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             ],
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_name": "ceph_lv0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_size": "21470642176",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "name": "ceph_lv0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "tags": {
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.cluster_name": "ceph",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.crush_device_class": "",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.encrypted": "0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.objectstore": "bluestore",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.osd_id": "0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.type": "block",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.vdo": "0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.with_tpm": "0"
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             },
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "type": "block",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "vg_name": "ceph_vg0"
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:         }
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:     ],
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:     "1": [
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:         {
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "devices": [
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "/dev/loop4"
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             ],
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_name": "ceph_lv1",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_size": "21470642176",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "name": "ceph_lv1",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "tags": {
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.cluster_name": "ceph",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.crush_device_class": "",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.encrypted": "0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.objectstore": "bluestore",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.osd_id": "1",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.type": "block",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.vdo": "0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.with_tpm": "0"
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             },
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "type": "block",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "vg_name": "ceph_vg1"
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:         }
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:     ],
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:     "2": [
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:         {
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "devices": [
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "/dev/loop5"
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             ],
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_name": "ceph_lv2",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_size": "21470642176",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "name": "ceph_lv2",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "tags": {
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.cluster_name": "ceph",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.crush_device_class": "",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.encrypted": "0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.objectstore": "bluestore",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.osd_id": "2",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.type": "block",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.vdo": "0",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:                 "ceph.with_tpm": "0"
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             },
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "type": "block",
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:             "vg_name": "ceph_vg2"
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:         }
Jan 20 19:10:00 compute-0 angry_lederberg[125452]:     ]
Jan 20 19:10:00 compute-0 angry_lederberg[125452]: }
Jan 20 19:10:00 compute-0 systemd[1]: libpod-67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6.scope: Deactivated successfully.
Jan 20 19:10:00 compute-0 podman[125435]: 2026-01-20 19:10:00.846537546 +0000 UTC m=+0.438572434 container died 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:10:00 compute-0 systemd[1]: var-lib-containers-storage-overlay-aa804a0693ae7689f9abb9a2909dea1016681f6fc5011c9229ecb6f48929f699-merged.mount: Deactivated successfully.
Jan 20 19:10:00 compute-0 podman[125435]: 2026-01-20 19:10:00.905168327 +0000 UTC m=+0.497203255 container remove 67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:10:00 compute-0 systemd[1]: libpod-conmon-67e15c8d738827862509077ead696f6bbac82a159cd36a2d52d6e912d0d89cc6.scope: Deactivated successfully.
Jan 20 19:10:00 compute-0 sudo[125358]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:01 compute-0 sudo[125472]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:10:01 compute-0 sudo[125472]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:01 compute-0 sudo[125472]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:01 compute-0 sudo[125497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:10:01 compute-0 sudo[125497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:01 compute-0 podman[125535]: 2026-01-20 19:10:01.414235666 +0000 UTC m=+0.059169806 container create 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True)
Jan 20 19:10:01 compute-0 systemd[1]: Started libpod-conmon-6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d.scope.
Jan 20 19:10:01 compute-0 ceph-mon[75120]: 9.16 scrub starts
Jan 20 19:10:01 compute-0 ceph-mon[75120]: 9.16 scrub ok
Jan 20 19:10:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:10:01 compute-0 podman[125535]: 2026-01-20 19:10:01.392838956 +0000 UTC m=+0.037773156 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:10:01 compute-0 podman[125535]: 2026-01-20 19:10:01.497847745 +0000 UTC m=+0.142781915 container init 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:10:01 compute-0 podman[125535]: 2026-01-20 19:10:01.504857768 +0000 UTC m=+0.149791918 container start 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:10:01 compute-0 podman[125535]: 2026-01-20 19:10:01.508244102 +0000 UTC m=+0.153178272 container attach 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:10:01 compute-0 gallant_meninsky[125552]: 167 167
Jan 20 19:10:01 compute-0 systemd[1]: libpod-6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d.scope: Deactivated successfully.
Jan 20 19:10:01 compute-0 podman[125535]: 2026-01-20 19:10:01.510999581 +0000 UTC m=+0.155933741 container died 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:10:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-c4e12ffd98d344e19f5d459d015af7a524526635d240447d3f95bde3bc7b46ae-merged.mount: Deactivated successfully.
Jan 20 19:10:01 compute-0 podman[125535]: 2026-01-20 19:10:01.551313359 +0000 UTC m=+0.196247509 container remove 6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=gallant_meninsky, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:10:01 compute-0 systemd[1]: libpod-conmon-6471c6b9e0818bd548b0f85e49d56afaba54fb7f07214328415e496b9cdf428d.scope: Deactivated successfully.
Jan 20 19:10:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:01 compute-0 podman[125577]: 2026-01-20 19:10:01.726043122 +0000 UTC m=+0.043263142 container create 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 19:10:01 compute-0 systemd[1]: Started libpod-conmon-867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839.scope.
Jan 20 19:10:01 compute-0 podman[125577]: 2026-01-20 19:10:01.70533993 +0000 UTC m=+0.022559970 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:10:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff11b03b9e643ddc06a49f7cd1ab4f7d901195d6a092ae391bf3bd71e42d280e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff11b03b9e643ddc06a49f7cd1ab4f7d901195d6a092ae391bf3bd71e42d280e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff11b03b9e643ddc06a49f7cd1ab4f7d901195d6a092ae391bf3bd71e42d280e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff11b03b9e643ddc06a49f7cd1ab4f7d901195d6a092ae391bf3bd71e42d280e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:10:01 compute-0 podman[125577]: 2026-01-20 19:10:01.823033773 +0000 UTC m=+0.140253803 container init 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:10:01 compute-0 podman[125577]: 2026-01-20 19:10:01.828727504 +0000 UTC m=+0.145947514 container start 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:10:01 compute-0 podman[125577]: 2026-01-20 19:10:01.832351383 +0000 UTC m=+0.149571413 container attach 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:10:01 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 20 19:10:02 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 20 19:10:02 compute-0 ceph-mon[75120]: pgmap v338: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:02 compute-0 ceph-mon[75120]: 9.0 scrub starts
Jan 20 19:10:02 compute-0 ceph-mon[75120]: 9.0 scrub ok
Jan 20 19:10:02 compute-0 lvm[125673]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:10:02 compute-0 lvm[125670]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:10:02 compute-0 lvm[125673]: VG ceph_vg1 finished
Jan 20 19:10:02 compute-0 lvm[125670]: VG ceph_vg0 finished
Jan 20 19:10:02 compute-0 lvm[125675]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:10:02 compute-0 lvm[125675]: VG ceph_vg2 finished
Jan 20 19:10:02 compute-0 naughty_allen[125594]: {}
Jan 20 19:10:02 compute-0 systemd[1]: libpod-867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839.scope: Deactivated successfully.
Jan 20 19:10:02 compute-0 systemd[1]: libpod-867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839.scope: Consumed 1.450s CPU time.
Jan 20 19:10:02 compute-0 podman[125577]: 2026-01-20 19:10:02.731897455 +0000 UTC m=+1.049117465 container died 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:10:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff11b03b9e643ddc06a49f7cd1ab4f7d901195d6a092ae391bf3bd71e42d280e-merged.mount: Deactivated successfully.
Jan 20 19:10:02 compute-0 podman[125577]: 2026-01-20 19:10:02.781554924 +0000 UTC m=+1.098774924 container remove 867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=naughty_allen, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:10:02 compute-0 systemd[1]: libpod-conmon-867091af7287f123311217e7b369cbfce57db8a8db41fe32a07628e2cafe9839.scope: Deactivated successfully.
Jan 20 19:10:02 compute-0 sudo[125497]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:10:02 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:10:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:10:02 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:10:02 compute-0 sudo[125689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:10:02 compute-0 sudo[125689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:10:02 compute-0 sudo[125689]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:02 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 20 19:10:03 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 20 19:10:03 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 20 19:10:03 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 20 19:10:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:10:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:10:03 compute-0 ceph-mon[75120]: 9.a scrub starts
Jan 20 19:10:03 compute-0 ceph-mon[75120]: 9.a scrub ok
Jan 20 19:10:03 compute-0 ceph-mon[75120]: pgmap v339: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:04 compute-0 sshd-session[125714]: Accepted publickey for zuul from 192.168.122.30 port 47752 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:10:04 compute-0 systemd-logind[797]: New session 43 of user zuul.
Jan 20 19:10:04 compute-0 systemd[1]: Started Session 43 of User zuul.
Jan 20 19:10:04 compute-0 sshd-session[125714]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:10:04 compute-0 ceph-mon[75120]: 9.5 scrub starts
Jan 20 19:10:04 compute-0 ceph-mon[75120]: 9.5 scrub ok
Jan 20 19:10:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 20 19:10:05 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 20 19:10:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:05 compute-0 python3.9[125867]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:10:05 compute-0 ceph-mon[75120]: 9.1a scrub starts
Jan 20 19:10:05 compute-0 ceph-mon[75120]: 9.1a scrub ok
Jan 20 19:10:05 compute-0 ceph-mon[75120]: pgmap v340: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:06 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 20 19:10:06 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 20 19:10:06 compute-0 sudo[126021]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krxuuovirkxvdnhishoqfqztvlkrpdmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936206.1447208-29-30094493095877/AnsiballZ_setup.py'
Jan 20 19:10:06 compute-0 sudo[126021]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:06 compute-0 python3.9[126023]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:10:06 compute-0 ceph-mon[75120]: 9.4 scrub starts
Jan 20 19:10:06 compute-0 ceph-mon[75120]: 9.4 scrub ok
Jan 20 19:10:06 compute-0 sudo[126021]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:07 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 20 19:10:07 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 20 19:10:07 compute-0 sudo[126105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvbycokgqfkokoeteaofdygtzmwhqmlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936206.1447208-29-30094493095877/AnsiballZ_dnf.py'
Jan 20 19:10:07 compute-0 sudo[126105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:07 compute-0 python3.9[126107]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 20 19:10:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:07 compute-0 ceph-mon[75120]: pgmap v341: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:08 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 20 19:10:08 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 20 19:10:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:08 compute-0 ceph-mon[75120]: 9.9 scrub starts
Jan 20 19:10:08 compute-0 ceph-mon[75120]: 9.9 scrub ok
Jan 20 19:10:08 compute-0 ceph-mon[75120]: 9.1f scrub starts
Jan 20 19:10:08 compute-0 ceph-mon[75120]: 9.1f scrub ok
Jan 20 19:10:08 compute-0 sudo[126105]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:09 compute-0 python3.9[126258]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:10:09 compute-0 ceph-mon[75120]: pgmap v342: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:10 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 20 19:10:10 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 20 19:10:10 compute-0 python3.9[126409]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 19:10:11 compute-0 ceph-mon[75120]: 9.d scrub starts
Jan 20 19:10:11 compute-0 ceph-mon[75120]: 9.d scrub ok
Jan 20 19:10:11 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 20 19:10:11 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 20 19:10:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:11 compute-0 python3.9[126559]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:10:12 compute-0 python3.9[126709]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:10:12 compute-0 ceph-mon[75120]: 9.1 scrub starts
Jan 20 19:10:12 compute-0 ceph-mon[75120]: 9.1 scrub ok
Jan 20 19:10:12 compute-0 ceph-mon[75120]: pgmap v343: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:12 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 20 19:10:12 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 20 19:10:12 compute-0 sshd-session[125717]: Connection closed by 192.168.122.30 port 47752
Jan 20 19:10:12 compute-0 sshd-session[125714]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:10:12 compute-0 systemd[1]: session-43.scope: Deactivated successfully.
Jan 20 19:10:12 compute-0 systemd[1]: session-43.scope: Consumed 5.843s CPU time.
Jan 20 19:10:12 compute-0 systemd-logind[797]: Session 43 logged out. Waiting for processes to exit.
Jan 20 19:10:12 compute-0 systemd-logind[797]: Removed session 43.
Jan 20 19:10:13 compute-0 ceph-mon[75120]: 9.3 scrub starts
Jan 20 19:10:13 compute-0 ceph-mon[75120]: 9.3 scrub ok
Jan 20 19:10:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:14 compute-0 ceph-mon[75120]: pgmap v344: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:16 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 20 19:10:16 compute-0 sshd-session[71263]: Received disconnect from 38.102.83.180 port 43946:11: disconnected by user
Jan 20 19:10:16 compute-0 sshd-session[71263]: Disconnected from user zuul 38.102.83.180 port 43946
Jan 20 19:10:16 compute-0 sshd-session[71260]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:10:16 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 20 19:10:16 compute-0 systemd[1]: session-18.scope: Consumed 1min 55.829s CPU time.
Jan 20 19:10:16 compute-0 systemd-logind[797]: Session 18 logged out. Waiting for processes to exit.
Jan 20 19:10:16 compute-0 systemd-logind[797]: Removed session 18.
Jan 20 19:10:16 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 20 19:10:16 compute-0 ceph-mon[75120]: pgmap v345: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:17 compute-0 ceph-mon[75120]: 9.1c scrub starts
Jan 20 19:10:17 compute-0 ceph-mon[75120]: 9.1c scrub ok
Jan 20 19:10:17 compute-0 ceph-mon[75120]: pgmap v346: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:18 compute-0 sshd-session[126734]: Accepted publickey for zuul from 192.168.122.30 port 53344 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:10:18 compute-0 systemd-logind[797]: New session 44 of user zuul.
Jan 20 19:10:18 compute-0 systemd[1]: Started Session 44 of User zuul.
Jan 20 19:10:18 compute-0 sshd-session[126734]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:10:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:19 compute-0 python3.9[126887]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:10:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:20 compute-0 sudo[127041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwkiugavugfxggnkvofjfrvyibsewpqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936220.1044843-45-127305009470201/AnsiballZ_file.py'
Jan 20 19:10:20 compute-0 sudo[127041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:20 compute-0 python3.9[127043]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:21 compute-0 sudo[127041]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:21 compute-0 ceph-mon[75120]: pgmap v347: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:21 compute-0 sudo[127193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdaywhkjjeejqwodktzpmhcnuibeiplr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936221.226396-45-96258549419571/AnsiballZ_file.py'
Jan 20 19:10:21 compute-0 sudo[127193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:21 compute-0 python3.9[127195]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:21 compute-0 sudo[127193]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:22 compute-0 ceph-mon[75120]: pgmap v348: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:22 compute-0 sudo[127345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vapwrrfjmtwglfdpsooxfuopbuhxnbho ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936221.8017986-60-33565049330087/AnsiballZ_stat.py'
Jan 20 19:10:22 compute-0 sudo[127345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:22 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 20 19:10:22 compute-0 python3.9[127347]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:22 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 20 19:10:22 compute-0 sudo[127345]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:22 compute-0 sudo[127468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fholyxdurcqtcyocfryuwpmshodnqizz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936221.8017986-60-33565049330087/AnsiballZ_copy.py'
Jan 20 19:10:22 compute-0 sudo[127468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:23 compute-0 python3.9[127470]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936221.8017986-60-33565049330087/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=d4257a70fdd0e32e402a88c76489fb75b7e683f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:23 compute-0 sudo[127468]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:23 compute-0 ceph-mon[75120]: 9.1d scrub starts
Jan 20 19:10:23 compute-0 ceph-mon[75120]: 9.1d scrub ok
Jan 20 19:10:23 compute-0 sudo[127620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wvswhmlklreyqtqydzxabouaypcqycqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936223.1394405-60-215435658161296/AnsiballZ_stat.py'
Jan 20 19:10:23 compute-0 sudo[127620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:23 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 20 19:10:23 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 20 19:10:23 compute-0 python3.9[127622]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:23 compute-0 sudo[127620]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:23 compute-0 sudo[127743]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqnpimrjhkhnxmmsydyfpeqewslstnum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936223.1394405-60-215435658161296/AnsiballZ_copy.py'
Jan 20 19:10:23 compute-0 sudo[127743]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:24 compute-0 python3.9[127745]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936223.1394405-60-215435658161296/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=152233ee71f040918347d87ff03f6885e159af40 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:24 compute-0 sudo[127743]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:24 compute-0 ceph-mon[75120]: 9.1b scrub starts
Jan 20 19:10:24 compute-0 ceph-mon[75120]: 9.1b scrub ok
Jan 20 19:10:24 compute-0 ceph-mon[75120]: pgmap v349: 305 pgs: 305 active+clean; 462 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:24 compute-0 sudo[127895]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpukzhugqrnzowbmfivdihxoxvxlshvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936224.1759803-60-239020377512394/AnsiballZ_stat.py'
Jan 20 19:10:24 compute-0 sudo[127895]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:24 compute-0 python3.9[127897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:24 compute-0 sudo[127895]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:24 compute-0 sudo[128018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smlsijngzmxgpwccgxjwghrjjhznxgob ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936224.1759803-60-239020377512394/AnsiballZ_copy.py'
Jan 20 19:10:24 compute-0 sudo[128018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:25 compute-0 python3.9[128020]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936224.1759803-60-239020377512394/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=9453dc17e0dd9df101138e7ca8744fe471f47316 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:25 compute-0 sudo[128018]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:25 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 20 19:10:25 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 20 19:10:25 compute-0 sudo[128170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnftskfgfdxlfbbouhzzubktogidfgrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936225.322641-104-4972267514852/AnsiballZ_file.py'
Jan 20 19:10:25 compute-0 sudo[128170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:25 compute-0 python3.9[128172]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:25 compute-0 sudo[128170]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:26 compute-0 sudo[128322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmkvgbezsgzbwrbrsbawebjzqdfdhbqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936225.8989863-104-25807417793011/AnsiballZ_file.py'
Jan 20 19:10:26 compute-0 sudo[128322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:26 compute-0 python3.9[128324]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:26 compute-0 sudo[128322]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:26 compute-0 sudo[128474]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnuogisflpwawjxpiiafgdgpvidvnthd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936226.4822671-119-21557831655293/AnsiballZ_stat.py'
Jan 20 19:10:26 compute-0 sudo[128474]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:26 compute-0 ceph-mon[75120]: 9.1e scrub starts
Jan 20 19:10:26 compute-0 ceph-mon[75120]: 9.1e scrub ok
Jan 20 19:10:26 compute-0 ceph-mon[75120]: pgmap v350: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:26 compute-0 python3.9[128476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:26 compute-0 sudo[128474]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:27 compute-0 sudo[128597]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxfpgnbbhzpftsqyvslmkoqhyzwvzbzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936226.4822671-119-21557831655293/AnsiballZ_copy.py'
Jan 20 19:10:27 compute-0 sudo[128597]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:27 compute-0 python3.9[128599]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936226.4822671-119-21557831655293/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=72e76094c7443781bf758a7464981f2b70fe5291 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:27 compute-0 sudo[128597]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:27 compute-0 sudo[128749]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqxbdhgscfpdnkqtnasxsieqthfivfoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936227.5784624-119-79455915135213/AnsiballZ_stat.py'
Jan 20 19:10:27 compute-0 sudo[128749]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:27 compute-0 python3.9[128751]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:27 compute-0 sudo[128749]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:28 compute-0 sudo[128872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhuvadmirreoljvnzlpcjgrrmmwgubyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936227.5784624-119-79455915135213/AnsiballZ_copy.py'
Jan 20 19:10:28 compute-0 sudo[128872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:28 compute-0 python3.9[128874]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936227.5784624-119-79455915135213/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a18bf0ee72aa50109151ff784db14fca75746767 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:28 compute-0 sudo[128872]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:28 compute-0 ceph-mon[75120]: pgmap v351: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:28 compute-0 sudo[129024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgdcklpncjrbsrquqvpadqzomzkoyiig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936228.631237-119-83399760921554/AnsiballZ_stat.py'
Jan 20 19:10:28 compute-0 sudo[129024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:29 compute-0 python3.9[129026]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:29 compute-0 sudo[129024]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:29 compute-0 sudo[129147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iazegszuxsdhlqufduxohulwpcmwmslc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936228.631237-119-83399760921554/AnsiballZ_copy.py'
Jan 20 19:10:29 compute-0 sudo[129147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:29 compute-0 python3.9[129149]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936228.631237-119-83399760921554/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d703e43b59f2c47bf9794e81afbf179a565c6333 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:29 compute-0 sudo[129147]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:29 compute-0 sudo[129299]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knppgtnviytcxgvtkrpkjbkwvmyozkdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936229.6822855-163-130095532840677/AnsiballZ_file.py'
Jan 20 19:10:29 compute-0 sudo[129299]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:30 compute-0 python3.9[129301]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:30 compute-0 sudo[129299]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:30 compute-0 sudo[129451]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmislyvbiziqnhsibxbavmczgowacsct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936230.321634-163-49160205991430/AnsiballZ_file.py'
Jan 20 19:10:30 compute-0 sudo[129451]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:30 compute-0 ceph-mon[75120]: pgmap v352: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:30 compute-0 python3.9[129453]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:30 compute-0 sudo[129451]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:31 compute-0 sudo[129603]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hregaxaibaqnonnnrehireffnrqcufvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936231.0116086-178-149965609728351/AnsiballZ_stat.py'
Jan 20 19:10:31 compute-0 sudo[129603]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:31 compute-0 python3.9[129605]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:31 compute-0 sudo[129603]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:10:31
Jan 20 19:10:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:10:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:10:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['volumes', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'images', 'backups']
Jan 20 19:10:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:10:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:31 compute-0 sudo[129726]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeefylilxgcexiwrjfdzqrqnyuutcbbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936231.0116086-178-149965609728351/AnsiballZ_copy.py'
Jan 20 19:10:31 compute-0 sudo[129726]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:32 compute-0 python3.9[129728]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936231.0116086-178-149965609728351/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=85f679b0dc57f98e831d1c0dde8acc81b42034a0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:32 compute-0 sudo[129726]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:32 compute-0 sudo[129878]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvpuhgvgykcoetntzgwexslzkjyghlro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936232.1903186-178-191228168464007/AnsiballZ_stat.py'
Jan 20 19:10:32 compute-0 sudo[129878]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:32 compute-0 python3.9[129880]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:32 compute-0 sudo[129878]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:32 compute-0 ceph-mon[75120]: pgmap v353: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:32 compute-0 sudo[130001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ryicicjqhrxfecwtnlohrmdgeqvuybty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936232.1903186-178-191228168464007/AnsiballZ_copy.py'
Jan 20 19:10:32 compute-0 sudo[130001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:33 compute-0 python3.9[130003]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936232.1903186-178-191228168464007/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a18bf0ee72aa50109151ff784db14fca75746767 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:33 compute-0 sudo[130001]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:33 compute-0 sudo[130153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmjzervxmvoihowaxtzmnivreoqaqrgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936233.258287-178-34417673641001/AnsiballZ_stat.py'
Jan 20 19:10:33 compute-0 sudo[130153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:33 compute-0 python3.9[130155]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:33 compute-0 sudo[130153]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:34 compute-0 sudo[130276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cnqwpkyjzwwrizumfqvetviesxxvtuiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936233.258287-178-34417673641001/AnsiballZ_copy.py'
Jan 20 19:10:34 compute-0 sudo[130276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:34 compute-0 python3.9[130278]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936233.258287-178-34417673641001/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=9490ef0441c17c9b1176677fb60ad630695d18c3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:34 compute-0 sudo[130276]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:10:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:10:34 compute-0 ceph-mon[75120]: pgmap v354: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:35 compute-0 sudo[130428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmcrjszmnwuyglhogvxtktlhvswgsacn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936234.9185398-238-234374364477320/AnsiballZ_file.py'
Jan 20 19:10:35 compute-0 sudo[130428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:35 compute-0 python3.9[130430]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:35 compute-0 sudo[130428]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:35 compute-0 sudo[130580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvaeznahpculkzfyfjgygjwgtstivsmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936235.482825-246-45545398084939/AnsiballZ_stat.py'
Jan 20 19:10:35 compute-0 sudo[130580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:35 compute-0 python3.9[130582]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:35 compute-0 sudo[130580]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:36 compute-0 sudo[130703]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkorvhfyxjixhpmwbzrqbngyqopcdqum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936235.482825-246-45545398084939/AnsiballZ_copy.py'
Jan 20 19:10:36 compute-0 sudo[130703]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:36 compute-0 python3.9[130705]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936235.482825-246-45545398084939/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:36 compute-0 sudo[130703]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:36 compute-0 ceph-mon[75120]: pgmap v355: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:36 compute-0 sudo[130855]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihzghrjgqoerflyyoxseehesnjbdvmmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936236.6137972-262-189047675946673/AnsiballZ_file.py'
Jan 20 19:10:36 compute-0 sudo[130855]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:37 compute-0 python3.9[130857]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:37 compute-0 sudo[130855]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:37 compute-0 sudo[131007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bigfodetcvkhprueuiimmggnruiftnss ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936237.1839015-270-252727844320334/AnsiballZ_stat.py'
Jan 20 19:10:37 compute-0 sudo[131007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:37 compute-0 python3.9[131009]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:37 compute-0 sudo[131007]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:37 compute-0 sudo[131130]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfucyujpcqtevoqjvivohmbqouaztuqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936237.1839015-270-252727844320334/AnsiballZ_copy.py'
Jan 20 19:10:37 compute-0 sudo[131130]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:38 compute-0 python3.9[131132]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936237.1839015-270-252727844320334/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:38 compute-0 sudo[131130]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:38 compute-0 sudo[131282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmcopqnjkcsldyqpjjreyjyauucflrhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936238.3177783-286-113112067246517/AnsiballZ_file.py'
Jan 20 19:10:38 compute-0 sudo[131282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:38 compute-0 python3.9[131284]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:38 compute-0 sudo[131282]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:38 compute-0 ceph-mon[75120]: pgmap v356: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:39 compute-0 sudo[131434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsbnmfkmyyfrisxazvqfldwtgcwtitac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936238.878624-294-243176634967829/AnsiballZ_stat.py'
Jan 20 19:10:39 compute-0 sudo[131434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:39 compute-0 python3.9[131436]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:39 compute-0 sudo[131434]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:39 compute-0 sudo[131557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zuqjupmljbllnblpxzaubnthwrxafbky ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936238.878624-294-243176634967829/AnsiballZ_copy.py'
Jan 20 19:10:39 compute-0 sudo[131557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:39 compute-0 python3.9[131559]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936238.878624-294-243176634967829/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:39 compute-0 sudo[131557]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:40 compute-0 sudo[131709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kipuqzkxqpkjjdunceopfvwjxvaluqox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936240.109337-310-206766139913935/AnsiballZ_file.py'
Jan 20 19:10:40 compute-0 sudo[131709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:40 compute-0 python3.9[131711]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:40 compute-0 sudo[131709]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:40 compute-0 ceph-mon[75120]: pgmap v357: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:41 compute-0 sudo[131861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quockorcfknajkjxcnnoyldwqgwshekn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936240.7462938-318-258365391249608/AnsiballZ_stat.py'
Jan 20 19:10:41 compute-0 sudo[131861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:41 compute-0 python3.9[131863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:41 compute-0 sudo[131861]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:41 compute-0 sudo[131984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grssrdaoopcmfoblwbifjlgitypjzhpo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936240.7462938-318-258365391249608/AnsiballZ_copy.py'
Jan 20 19:10:41 compute-0 sudo[131984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:41 compute-0 python3.9[131986]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936240.7462938-318-258365391249608/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:41 compute-0 sudo[131984]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:42 compute-0 sudo[132136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wprmwdulenospfracvpvcfmvivzietim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936241.897616-334-210752866152893/AnsiballZ_file.py'
Jan 20 19:10:42 compute-0 sudo[132136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:42 compute-0 python3.9[132138]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:42 compute-0 sudo[132136]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:42 compute-0 sudo[132288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgifzyreyuzdxwfurkzdgfxygyltqlkh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936242.4761004-342-42346712225165/AnsiballZ_stat.py'
Jan 20 19:10:42 compute-0 sudo[132288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:42 compute-0 ceph-mon[75120]: pgmap v358: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:42 compute-0 python3.9[132290]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:42 compute-0 sudo[132288]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:43 compute-0 sudo[132411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggrawfpsudpsvnptumoapdycratsllqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936242.4761004-342-42346712225165/AnsiballZ_copy.py'
Jan 20 19:10:43 compute-0 sudo[132411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:43 compute-0 python3.9[132413]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936242.4761004-342-42346712225165/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:43 compute-0 sudo[132411]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:43 compute-0 sudo[132563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcxpnpfdrtqljtratyzieelhefrewdly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936243.6105049-358-101835077388618/AnsiballZ_file.py'
Jan 20 19:10:43 compute-0 sudo[132563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:44 compute-0 ceph-mon[75120]: pgmap v359: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:44 compute-0 python3.9[132565]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:10:44 compute-0 sudo[132563]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:44 compute-0 sudo[132715]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cuytrtakwtrwnnowsxmekdfmubbejcrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936244.1598256-366-93601247695196/AnsiballZ_stat.py'
Jan 20 19:10:44 compute-0 sudo[132715]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:10:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:10:44 compute-0 python3.9[132717]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:44 compute-0 sudo[132715]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:44 compute-0 sudo[132838]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnxkjuijpyykvbzhtyquqgqndipwarym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936244.1598256-366-93601247695196/AnsiballZ_copy.py'
Jan 20 19:10:44 compute-0 sudo[132838]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:45 compute-0 python3.9[132840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936244.1598256-366-93601247695196/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=a3ba5373cbe9b77d5caa7583160220709f3d2e75 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:45 compute-0 sudo[132838]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:45 compute-0 sshd-session[126737]: Connection closed by 192.168.122.30 port 53344
Jan 20 19:10:45 compute-0 sshd-session[126734]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:10:45 compute-0 systemd[1]: session-44.scope: Deactivated successfully.
Jan 20 19:10:45 compute-0 systemd[1]: session-44.scope: Consumed 21.174s CPU time.
Jan 20 19:10:45 compute-0 systemd-logind[797]: Session 44 logged out. Waiting for processes to exit.
Jan 20 19:10:45 compute-0 systemd-logind[797]: Removed session 44.
Jan 20 19:10:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:46 compute-0 ceph-mon[75120]: pgmap v360: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:48 compute-0 ceph-mon[75120]: pgmap v361: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:50 compute-0 sshd-session[132865]: Accepted publickey for zuul from 192.168.122.30 port 55586 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:10:50 compute-0 systemd-logind[797]: New session 45 of user zuul.
Jan 20 19:10:50 compute-0 systemd[1]: Started Session 45 of User zuul.
Jan 20 19:10:50 compute-0 sshd-session[132865]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:10:50 compute-0 ceph-mon[75120]: pgmap v362: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:51 compute-0 sudo[133018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyfdgfwscghautoevybvqiabqaadjsxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936250.6456487-17-275701374901600/AnsiballZ_file.py'
Jan 20 19:10:51 compute-0 sudo[133018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:51 compute-0 python3.9[133020]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:51 compute-0 sudo[133018]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:51 compute-0 sudo[133170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ablxaffaelpndjjbbuxrjdukfwusdoyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936251.4793594-29-186958417720475/AnsiballZ_stat.py'
Jan 20 19:10:51 compute-0 sudo[133170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:52 compute-0 python3.9[133172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:52 compute-0 sudo[133170]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:52 compute-0 sudo[133293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjicjlartthatdificwhmuxpzopvvlgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936251.4793594-29-186958417720475/AnsiballZ_copy.py'
Jan 20 19:10:52 compute-0 sudo[133293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:52 compute-0 python3.9[133295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936251.4793594-29-186958417720475/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=82f4fc7876a2f5ec58c3b05a59c81182fa299df3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:52 compute-0 sudo[133293]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:52 compute-0 ceph-mon[75120]: pgmap v363: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:53 compute-0 sudo[133445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwhggyidtqsladekrqhloiodpejnjbdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936252.794772-29-213823089534989/AnsiballZ_stat.py'
Jan 20 19:10:53 compute-0 sudo[133445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:53 compute-0 python3.9[133447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:10:53 compute-0 sudo[133445]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:53 compute-0 sudo[133568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrpuwyfchtqfrojdegtlbjczdjlgnolb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936252.794772-29-213823089534989/AnsiballZ_copy.py'
Jan 20 19:10:53 compute-0 sudo[133568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:10:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:53 compute-0 python3.9[133570]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936252.794772-29-213823089534989/.source.conf _original_basename=ceph.conf follow=False checksum=07857ecc6916485d0d36f394eaef27670eedaf2c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:10:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:53 compute-0 sudo[133568]: pam_unix(sudo:session): session closed for user root
Jan 20 19:10:54 compute-0 sshd-session[132868]: Connection closed by 192.168.122.30 port 55586
Jan 20 19:10:54 compute-0 sshd-session[132865]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:10:54 compute-0 systemd[1]: session-45.scope: Deactivated successfully.
Jan 20 19:10:54 compute-0 systemd[1]: session-45.scope: Consumed 2.450s CPU time.
Jan 20 19:10:54 compute-0 systemd-logind[797]: Session 45 logged out. Waiting for processes to exit.
Jan 20 19:10:54 compute-0 systemd-logind[797]: Removed session 45.
Jan 20 19:10:54 compute-0 ceph-mon[75120]: pgmap v364: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:55 compute-0 ceph-mon[75120]: pgmap v365: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:10:58 compute-0 ceph-mon[75120]: pgmap v366: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:10:59 compute-0 sshd-session[133595]: Accepted publickey for zuul from 192.168.122.30 port 50988 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:10:59 compute-0 systemd-logind[797]: New session 46 of user zuul.
Jan 20 19:10:59 compute-0 systemd[1]: Started Session 46 of User zuul.
Jan 20 19:10:59 compute-0 sshd-session[133595]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:10:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:00 compute-0 python3.9[133748]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:11:00 compute-0 ceph-mon[75120]: pgmap v367: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:01 compute-0 sudo[133902]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyxjwhbuapyzylpgoynmudbttvwhysfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936260.915085-29-157518125996680/AnsiballZ_file.py'
Jan 20 19:11:01 compute-0 sudo[133902]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:01 compute-0 python3.9[133904]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:11:01 compute-0 sudo[133902]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:01 compute-0 ceph-mon[75120]: pgmap v368: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:01 compute-0 sudo[134054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iywtgkbccnihzejdrfltzkkfdfweruqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936261.6842594-29-205387675737906/AnsiballZ_file.py'
Jan 20 19:11:01 compute-0 sudo[134054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:02 compute-0 python3.9[134056]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:11:02 compute-0 sudo[134054]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:02 compute-0 python3.9[134206]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:11:03 compute-0 sudo[134231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:11:03 compute-0 sudo[134231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:03 compute-0 sudo[134231]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:03 compute-0 sudo[134279]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:11:03 compute-0 sudo[134279]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:03 compute-0 sudo[134423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abxtcslgaoqpafgpzanvoatdheppusvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936263.009139-52-241960839364667/AnsiballZ_seboolean.py'
Jan 20 19:11:03 compute-0 sudo[134423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:03 compute-0 sudo[134279]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:11:03 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:11:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:11:03 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:11:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:11:03 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:11:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:11:03 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:11:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:11:03 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:11:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:11:03 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:11:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:11:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:11:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:11:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:11:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:11:03 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:11:03 compute-0 sudo[134440]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:11:03 compute-0 sudo[134440]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:03 compute-0 sudo[134440]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:03 compute-0 python3.9[134426]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 20 19:11:03 compute-0 sudo[134465]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:11:03 compute-0 sudo[134465]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:03 compute-0 podman[134503]: 2026-01-20 19:11:03.916917014 +0000 UTC m=+0.049143639 container create c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:11:03 compute-0 systemd[1]: Started libpod-conmon-c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda.scope.
Jan 20 19:11:03 compute-0 podman[134503]: 2026-01-20 19:11:03.891301201 +0000 UTC m=+0.023527856 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:11:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:04 compute-0 podman[134503]: 2026-01-20 19:11:04.011875453 +0000 UTC m=+0.144102098 container init c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 19:11:04 compute-0 podman[134503]: 2026-01-20 19:11:04.018909893 +0000 UTC m=+0.151136518 container start c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:11:04 compute-0 podman[134503]: 2026-01-20 19:11:04.023025817 +0000 UTC m=+0.155252432 container attach c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:11:04 compute-0 systemd[1]: libpod-c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda.scope: Deactivated successfully.
Jan 20 19:11:04 compute-0 infallible_mendeleev[134519]: 167 167
Jan 20 19:11:04 compute-0 conmon[134519]: conmon c2811b91f76d4bb72029 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda.scope/container/memory.events
Jan 20 19:11:04 compute-0 podman[134503]: 2026-01-20 19:11:04.026693139 +0000 UTC m=+0.158919754 container died c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:11:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-ad3021c7950182cd65172122db53e878a6471cb3767f0b4736290b10d77abf4a-merged.mount: Deactivated successfully.
Jan 20 19:11:04 compute-0 podman[134503]: 2026-01-20 19:11:04.072344158 +0000 UTC m=+0.204570783 container remove c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=infallible_mendeleev, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:11:04 compute-0 systemd[1]: libpod-conmon-c2811b91f76d4bb720291e966b74a987076bd496be3ba9c4a55587cc951e7dda.scope: Deactivated successfully.
Jan 20 19:11:04 compute-0 podman[134543]: 2026-01-20 19:11:04.236766187 +0000 UTC m=+0.051754938 container create af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:11:04 compute-0 systemd[1]: Started libpod-conmon-af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6.scope.
Jan 20 19:11:04 compute-0 podman[134543]: 2026-01-20 19:11:04.219279139 +0000 UTC m=+0.034267900 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:11:04 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 20 19:11:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:04 compute-0 podman[134543]: 2026-01-20 19:11:04.352166601 +0000 UTC m=+0.167155392 container init af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:11:04 compute-0 podman[134543]: 2026-01-20 19:11:04.361173186 +0000 UTC m=+0.176161927 container start af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:11:04 compute-0 podman[134543]: 2026-01-20 19:11:04.364713587 +0000 UTC m=+0.179702378 container attach af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:11:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:04 compute-0 ceph-mon[75120]: pgmap v369: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:04 compute-0 sudo[134423]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:04 compute-0 elastic_sanderson[134559]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:11:04 compute-0 elastic_sanderson[134559]: --> All data devices are unavailable
Jan 20 19:11:04 compute-0 systemd[1]: libpod-af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6.scope: Deactivated successfully.
Jan 20 19:11:04 compute-0 podman[134543]: 2026-01-20 19:11:04.834117861 +0000 UTC m=+0.649106602 container died af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 19:11:04 compute-0 systemd[1]: var-lib-containers-storage-overlay-d68db9520a524384f024aede44b1092b243c42e80dcaad06f632bad66d932bd9-merged.mount: Deactivated successfully.
Jan 20 19:11:04 compute-0 podman[134543]: 2026-01-20 19:11:04.913820393 +0000 UTC m=+0.728809134 container remove af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:11:04 compute-0 systemd[1]: libpod-conmon-af40efe770a391aeb596576e80eb4d076d9c0b3c6b8164da2a283bae904d9bd6.scope: Deactivated successfully.
Jan 20 19:11:04 compute-0 sudo[134465]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:05 compute-0 sudo[134631]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:11:05 compute-0 sudo[134631]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:05 compute-0 sudo[134631]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:05 compute-0 sudo[134688]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:11:05 compute-0 sudo[134688]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:05 compute-0 sudo[134794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fxepcloazqjviwuuflcwdnsrcubqfgtv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936264.9873297-62-65582980706279/AnsiballZ_setup.py'
Jan 20 19:11:05 compute-0 sudo[134794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:05 compute-0 podman[134807]: 2026-01-20 19:11:05.340601888 +0000 UTC m=+0.041451364 container create fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:11:05 compute-0 systemd[1]: Started libpod-conmon-fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae.scope.
Jan 20 19:11:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:05 compute-0 podman[134807]: 2026-01-20 19:11:05.321598996 +0000 UTC m=+0.022448492 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:11:05 compute-0 podman[134807]: 2026-01-20 19:11:05.417679321 +0000 UTC m=+0.118528817 container init fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:11:05 compute-0 podman[134807]: 2026-01-20 19:11:05.425001217 +0000 UTC m=+0.125850693 container start fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:11:05 compute-0 reverent_hopper[134823]: 167 167
Jan 20 19:11:05 compute-0 systemd[1]: libpod-fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae.scope: Deactivated successfully.
Jan 20 19:11:05 compute-0 podman[134807]: 2026-01-20 19:11:05.430033102 +0000 UTC m=+0.130882598 container attach fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:11:05 compute-0 podman[134807]: 2026-01-20 19:11:05.430460041 +0000 UTC m=+0.131309527 container died fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:11:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-8fb48acfa2dbf83eb9f761a16caabdc8ba5a246523ace10a8eb6bc40a40f42a0-merged.mount: Deactivated successfully.
Jan 20 19:11:05 compute-0 podman[134807]: 2026-01-20 19:11:05.462119851 +0000 UTC m=+0.162969327 container remove fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:11:05 compute-0 systemd[1]: libpod-conmon-fbe09fe050026f5a5dee7dd18e368711b352b698ec34e0a9c25aab458e0e8cae.scope: Deactivated successfully.
Jan 20 19:11:05 compute-0 python3.9[134796]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:11:05 compute-0 podman[134854]: 2026-01-20 19:11:05.647718702 +0000 UTC m=+0.069840310 container create 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:11:05 compute-0 systemd[1]: Started libpod-conmon-36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488.scope.
Jan 20 19:11:05 compute-0 podman[134854]: 2026-01-20 19:11:05.620432211 +0000 UTC m=+0.042553899 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:11:05 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0428e95d26e2a972289c979f86b5ecd2862eaf4aa80e1aef8336020440c50207/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0428e95d26e2a972289c979f86b5ecd2862eaf4aa80e1aef8336020440c50207/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0428e95d26e2a972289c979f86b5ecd2862eaf4aa80e1aef8336020440c50207/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:05 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0428e95d26e2a972289c979f86b5ecd2862eaf4aa80e1aef8336020440c50207/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:05 compute-0 podman[134854]: 2026-01-20 19:11:05.741215548 +0000 UTC m=+0.163337146 container init 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:11:05 compute-0 podman[134854]: 2026-01-20 19:11:05.748478623 +0000 UTC m=+0.170600211 container start 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 19:11:05 compute-0 podman[134854]: 2026-01-20 19:11:05.753174819 +0000 UTC m=+0.175296417 container attach 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 19:11:05 compute-0 sudo[134794]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:06 compute-0 goofy_moore[134872]: {
Jan 20 19:11:06 compute-0 goofy_moore[134872]:     "0": [
Jan 20 19:11:06 compute-0 goofy_moore[134872]:         {
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "devices": [
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "/dev/loop3"
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             ],
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_name": "ceph_lv0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_size": "21470642176",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "name": "ceph_lv0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "tags": {
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.cluster_name": "ceph",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.crush_device_class": "",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.encrypted": "0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.objectstore": "bluestore",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.osd_id": "0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.type": "block",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.vdo": "0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.with_tpm": "0"
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             },
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "type": "block",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "vg_name": "ceph_vg0"
Jan 20 19:11:06 compute-0 goofy_moore[134872]:         }
Jan 20 19:11:06 compute-0 goofy_moore[134872]:     ],
Jan 20 19:11:06 compute-0 goofy_moore[134872]:     "1": [
Jan 20 19:11:06 compute-0 goofy_moore[134872]:         {
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "devices": [
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "/dev/loop4"
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             ],
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_name": "ceph_lv1",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_size": "21470642176",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "name": "ceph_lv1",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "tags": {
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.cluster_name": "ceph",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.crush_device_class": "",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.encrypted": "0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.objectstore": "bluestore",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.osd_id": "1",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.type": "block",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.vdo": "0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.with_tpm": "0"
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             },
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "type": "block",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "vg_name": "ceph_vg1"
Jan 20 19:11:06 compute-0 goofy_moore[134872]:         }
Jan 20 19:11:06 compute-0 goofy_moore[134872]:     ],
Jan 20 19:11:06 compute-0 goofy_moore[134872]:     "2": [
Jan 20 19:11:06 compute-0 goofy_moore[134872]:         {
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "devices": [
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "/dev/loop5"
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             ],
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_name": "ceph_lv2",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_size": "21470642176",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "name": "ceph_lv2",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "tags": {
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.cluster_name": "ceph",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.crush_device_class": "",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.encrypted": "0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.objectstore": "bluestore",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.osd_id": "2",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.type": "block",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.vdo": "0",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:                 "ceph.with_tpm": "0"
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             },
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "type": "block",
Jan 20 19:11:06 compute-0 goofy_moore[134872]:             "vg_name": "ceph_vg2"
Jan 20 19:11:06 compute-0 goofy_moore[134872]:         }
Jan 20 19:11:06 compute-0 goofy_moore[134872]:     ]
Jan 20 19:11:06 compute-0 goofy_moore[134872]: }
Jan 20 19:11:06 compute-0 systemd[1]: libpod-36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488.scope: Deactivated successfully.
Jan 20 19:11:06 compute-0 podman[134854]: 2026-01-20 19:11:06.054594154 +0000 UTC m=+0.476715762 container died 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:11:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-0428e95d26e2a972289c979f86b5ecd2862eaf4aa80e1aef8336020440c50207-merged.mount: Deactivated successfully.
Jan 20 19:11:06 compute-0 podman[134854]: 2026-01-20 19:11:06.097812877 +0000 UTC m=+0.519934465 container remove 36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=goofy_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 19:11:06 compute-0 systemd[1]: libpod-conmon-36bc56f2c721cb34b028ad96d0f7bd6a1b3261a475dbe056caa3565d08ac5488.scope: Deactivated successfully.
Jan 20 19:11:06 compute-0 sudo[134688]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:06 compute-0 sudo[134965]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etzchsdinjdkanmwsslmqzulqjqxkdeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936264.9873297-62-65582980706279/AnsiballZ_dnf.py'
Jan 20 19:11:06 compute-0 sudo[134965]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:06 compute-0 sudo[134966]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:11:06 compute-0 sudo[134966]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:06 compute-0 sudo[134966]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:06 compute-0 sudo[134993]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:11:06 compute-0 sudo[134993]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:06 compute-0 python3.9[134985]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:11:06 compute-0 podman[135031]: 2026-01-20 19:11:06.526865953 +0000 UTC m=+0.041069514 container create 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:11:06 compute-0 systemd[1]: Started libpod-conmon-321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d.scope.
Jan 20 19:11:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:06 compute-0 podman[135031]: 2026-01-20 19:11:06.597626602 +0000 UTC m=+0.111830183 container init 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, CEPH_REF=tentacle, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:11:06 compute-0 podman[135031]: 2026-01-20 19:11:06.507561214 +0000 UTC m=+0.021764785 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:11:06 compute-0 podman[135031]: 2026-01-20 19:11:06.606215318 +0000 UTC m=+0.120418879 container start 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Jan 20 19:11:06 compute-0 podman[135031]: 2026-01-20 19:11:06.609696027 +0000 UTC m=+0.123899608 container attach 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:11:06 compute-0 nervous_bhaskara[135047]: 167 167
Jan 20 19:11:06 compute-0 systemd[1]: libpod-321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d.scope: Deactivated successfully.
Jan 20 19:11:06 compute-0 podman[135052]: 2026-01-20 19:11:06.65511763 +0000 UTC m=+0.027659100 container died 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:11:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-1291c1fff46ff1f6f82bb846a72fbf25b581160030e7e850b5191787e42ccece-merged.mount: Deactivated successfully.
Jan 20 19:11:06 compute-0 podman[135052]: 2026-01-20 19:11:06.694388253 +0000 UTC m=+0.066929633 container remove 321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_bhaskara, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:11:06 compute-0 systemd[1]: libpod-conmon-321e0ecda54fb839b519550284e9d1905bdbbba7ef9bc1136f6370d3c65c326d.scope: Deactivated successfully.
Jan 20 19:11:06 compute-0 ceph-mon[75120]: pgmap v370: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:06 compute-0 podman[135074]: 2026-01-20 19:11:06.876584316 +0000 UTC m=+0.043748877 container create 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:11:06 compute-0 systemd[1]: Started libpod-conmon-9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2.scope.
Jan 20 19:11:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9c3fbc1a335221521a672ef3b5aa396242b8f1164333241ddee12ed15da074/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9c3fbc1a335221521a672ef3b5aa396242b8f1164333241ddee12ed15da074/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9c3fbc1a335221521a672ef3b5aa396242b8f1164333241ddee12ed15da074/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca9c3fbc1a335221521a672ef3b5aa396242b8f1164333241ddee12ed15da074/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:06 compute-0 podman[135074]: 2026-01-20 19:11:06.855928446 +0000 UTC m=+0.023093057 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:11:06 compute-0 podman[135074]: 2026-01-20 19:11:06.954393535 +0000 UTC m=+0.121558126 container init 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 19:11:06 compute-0 podman[135074]: 2026-01-20 19:11:06.962839577 +0000 UTC m=+0.130004138 container start 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:11:06 compute-0 podman[135074]: 2026-01-20 19:11:06.965939908 +0000 UTC m=+0.133104479 container attach 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:11:07 compute-0 lvm[135169]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:11:07 compute-0 lvm[135169]: VG ceph_vg1 finished
Jan 20 19:11:07 compute-0 lvm[135168]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:11:07 compute-0 lvm[135168]: VG ceph_vg0 finished
Jan 20 19:11:07 compute-0 lvm[135171]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:11:07 compute-0 lvm[135171]: VG ceph_vg2 finished
Jan 20 19:11:07 compute-0 sudo[134965]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:07 compute-0 optimistic_robinson[135090]: {}
Jan 20 19:11:07 compute-0 systemd[1]: libpod-9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2.scope: Deactivated successfully.
Jan 20 19:11:07 compute-0 podman[135074]: 2026-01-20 19:11:07.790523988 +0000 UTC m=+0.957688559 container died 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:11:07 compute-0 systemd[1]: libpod-9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2.scope: Consumed 1.275s CPU time.
Jan 20 19:11:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-ca9c3fbc1a335221521a672ef3b5aa396242b8f1164333241ddee12ed15da074-merged.mount: Deactivated successfully.
Jan 20 19:11:07 compute-0 podman[135074]: 2026-01-20 19:11:07.83411486 +0000 UTC m=+1.001279421 container remove 9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=optimistic_robinson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:11:07 compute-0 systemd[1]: libpod-conmon-9e475bda626839e85b6cef364d1332d6ce32fea0e6bd94279ca9a3e67bc95fa2.scope: Deactivated successfully.
Jan 20 19:11:07 compute-0 sudo[134993]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:07 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:11:07 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:11:07 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:11:07 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:11:07 compute-0 sudo[135244]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:11:07 compute-0 sudo[135244]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:11:07 compute-0 sudo[135244]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:08 compute-0 sudo[135361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzyykjhxksxhxfhhjghpldljpfmdyhxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936267.8703012-74-191152412912656/AnsiballZ_systemd.py'
Jan 20 19:11:08 compute-0 sudo[135361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:08 compute-0 python3.9[135363]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 19:11:08 compute-0 ceph-mon[75120]: pgmap v371: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:11:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:11:08 compute-0 sudo[135361]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:09 compute-0 sudo[135516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxfrwyheydsfbqinelnmigllbvhaeyve ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768936269.0247152-82-22422619057502/AnsiballZ_edpm_nftables_snippet.py'
Jan 20 19:11:09 compute-0 sudo[135516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:09 compute-0 python3[135518]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                             rule:
                                               proto: udp
                                               dport: 4789
                                           - rule_name: 119 neutron geneve networks
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               state: ["UNTRACKED"]
                                           - rule_name: 120 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: OUTPUT
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                           - rule_name: 121 neutron geneve networks no conntrack
                                             rule:
                                               proto: udp
                                               dport: 6081
                                               table: raw
                                               chain: PREROUTING
                                               jump: NOTRACK
                                               action: append
                                               state: []
                                            dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 20 19:11:09 compute-0 sudo[135516]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:10 compute-0 sudo[135668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mizqitnzmgufsfnhwwoiggcctlagctuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936269.9171247-91-220767856721960/AnsiballZ_file.py'
Jan 20 19:11:10 compute-0 sudo[135668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:10 compute-0 python3.9[135670]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:10 compute-0 sudo[135668]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:10 compute-0 ceph-mon[75120]: pgmap v372: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:11 compute-0 sudo[135820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqopaxnstgideravqsgthpodnxrbqzdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936270.5650434-99-83358616968614/AnsiballZ_stat.py'
Jan 20 19:11:11 compute-0 sudo[135820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:11 compute-0 python3.9[135822]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:11 compute-0 sudo[135820]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:11 compute-0 sudo[135898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfkycfvgtowlrkkbuolyczmrsehwgxax ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936270.5650434-99-83358616968614/AnsiballZ_file.py'
Jan 20 19:11:11 compute-0 sudo[135898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:11 compute-0 python3.9[135900]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:11 compute-0 sudo[135898]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:12 compute-0 sudo[136050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eopjdqwzxxzywkephnoamedmjyqhtixm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936271.7814612-111-174925851233737/AnsiballZ_stat.py'
Jan 20 19:11:12 compute-0 sudo[136050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:12 compute-0 ceph-mon[75120]: pgmap v373: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:12 compute-0 python3.9[136052]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:12 compute-0 sudo[136050]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:12 compute-0 sudo[136128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vemgaqkidnektbcyexsextvzlkxrezqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936271.7814612-111-174925851233737/AnsiballZ_file.py'
Jan 20 19:11:12 compute-0 sudo[136128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:12 compute-0 python3.9[136130]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.e1ck4hfr recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:12 compute-0 sudo[136128]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:13 compute-0 sudo[136280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goddjtjqyetezawhntwdqzfjomuvgmvf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936272.808309-123-18835784299002/AnsiballZ_stat.py'
Jan 20 19:11:13 compute-0 sudo[136280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:13 compute-0 python3.9[136282]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:13 compute-0 sudo[136280]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:13 compute-0 sudo[136358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wufhofxxfxgxvrtzzbbfrsonwpjpdywv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936272.808309-123-18835784299002/AnsiballZ_file.py'
Jan 20 19:11:13 compute-0 sudo[136358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:13 compute-0 python3.9[136360]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:13 compute-0 sudo[136358]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:14 compute-0 sudo[136510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lspjxwroozqawibvutupfdefjrgqkyey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936273.881553-136-59900754026180/AnsiballZ_command.py'
Jan 20 19:11:14 compute-0 sudo[136510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:14 compute-0 python3.9[136512]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:11:14 compute-0 sudo[136510]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:14 compute-0 ceph-mon[75120]: pgmap v374: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:15 compute-0 sudo[136663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thoyozdvrlbbbfegtmlarxnaewfeadyf ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768936274.645132-144-94616712471182/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 19:11:15 compute-0 sudo[136663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:15 compute-0 python3[136665]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 19:11:15 compute-0 sudo[136663]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:15 compute-0 sudo[136815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmbzkdjwkmocxiaqkahfdjkvwjplyzgf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936275.4247189-152-10625078794513/AnsiballZ_stat.py'
Jan 20 19:11:15 compute-0 sudo[136815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:15 compute-0 python3.9[136817]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:15 compute-0 sudo[136815]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:16 compute-0 sudo[136940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejmglyvkvmlvjfshkwcpzjxqbwkpatvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936275.4247189-152-10625078794513/AnsiballZ_copy.py'
Jan 20 19:11:16 compute-0 sudo[136940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:16 compute-0 python3.9[136942]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936275.4247189-152-10625078794513/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:16 compute-0 sudo[136940]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:16 compute-0 ceph-mon[75120]: pgmap v375: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:17 compute-0 sudo[137092]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qizllixwdzqxgcaxedtuolwuiibfsgoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936276.756451-167-12054507933329/AnsiballZ_stat.py'
Jan 20 19:11:17 compute-0 sudo[137092]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:17 compute-0 python3.9[137094]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:17 compute-0 sudo[137092]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:17 compute-0 sudo[137217]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgrvvvdohmhvspvjmvabmttabopigpkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936276.756451-167-12054507933329/AnsiballZ_copy.py'
Jan 20 19:11:17 compute-0 sudo[137217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:17 compute-0 python3.9[137219]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936276.756451-167-12054507933329/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:17 compute-0 sudo[137217]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:18 compute-0 sudo[137369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xirlfubhlqqtyywfcmvxjewyhvsyfqfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936277.9738297-182-62635064492725/AnsiballZ_stat.py'
Jan 20 19:11:18 compute-0 sudo[137369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:18 compute-0 python3.9[137371]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:18 compute-0 sudo[137369]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:18 compute-0 sudo[137494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lehniudjjzcqccybcbmbvmoiuliahyxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936277.9738297-182-62635064492725/AnsiballZ_copy.py'
Jan 20 19:11:18 compute-0 sudo[137494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:18 compute-0 ceph-mon[75120]: pgmap v376: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:19 compute-0 python3.9[137496]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936277.9738297-182-62635064492725/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:19 compute-0 sudo[137494]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:19 compute-0 sudo[137646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tswxrxfljsvvvmstuqmzluqgnvbnauft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936279.2076283-197-229631044038092/AnsiballZ_stat.py'
Jan 20 19:11:19 compute-0 sudo[137646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:19 compute-0 python3.9[137648]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:19 compute-0 sudo[137646]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:19 compute-0 ceph-mon[75120]: pgmap v377: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:19 compute-0 sudo[137771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fscarfxjjeiksemibvvmhegccrpnjkcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936279.2076283-197-229631044038092/AnsiballZ_copy.py'
Jan 20 19:11:19 compute-0 sudo[137771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:20 compute-0 python3.9[137773]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936279.2076283-197-229631044038092/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:20 compute-0 sudo[137771]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:20 compute-0 sudo[137923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vyuzbjceoitwheqagdntagsanlowsvhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936280.303058-212-70113298385430/AnsiballZ_stat.py'
Jan 20 19:11:20 compute-0 sudo[137923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:20 compute-0 python3.9[137925]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:21 compute-0 sudo[137923]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:21 compute-0 sudo[138048]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhygmmasgepvukkusdjpovkgnrocjgbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936280.303058-212-70113298385430/AnsiballZ_copy.py'
Jan 20 19:11:21 compute-0 sudo[138048]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:21 compute-0 python3.9[138050]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936280.303058-212-70113298385430/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:21 compute-0 sudo[138048]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:21 compute-0 sudo[138200]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfxraxnzmwjuybupqkkpdzdmzfpzpsgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936281.7023535-227-46628348441827/AnsiballZ_file.py'
Jan 20 19:11:21 compute-0 sudo[138200]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:22 compute-0 python3.9[138202]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:22 compute-0 sudo[138200]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:22 compute-0 sudo[138352]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djpdsvjstiscisfugesswlyozdqekytl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936282.337333-235-6961025596457/AnsiballZ_command.py'
Jan 20 19:11:22 compute-0 sudo[138352]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:22 compute-0 ceph-mon[75120]: pgmap v378: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:22 compute-0 python3.9[138354]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:11:22 compute-0 sudo[138352]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:23 compute-0 sudo[138507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixhxpxnoyjfnahvnjqzhftvcafptqjrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936282.9873846-243-225943905024261/AnsiballZ_blockinfile.py'
Jan 20 19:11:23 compute-0 sudo[138507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:23 compute-0 python3.9[138509]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:23 compute-0 sudo[138507]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.754416) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283754495, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1750, "num_deletes": 251, "total_data_size": 2460365, "memory_usage": 2510280, "flush_reason": "Manual Compaction"}
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283765819, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1453682, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7346, "largest_seqno": 9095, "table_properties": {"data_size": 1448050, "index_size": 2515, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 17063, "raw_average_key_size": 20, "raw_value_size": 1434400, "raw_average_value_size": 1757, "num_data_blocks": 118, "num_entries": 816, "num_filter_entries": 816, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936126, "oldest_key_time": 1768936126, "file_creation_time": 1768936283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 11461 microseconds, and 6163 cpu microseconds.
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.765887) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1453682 bytes OK
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.765910) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.767170) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.767189) EVENT_LOG_v1 {"time_micros": 1768936283767184, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.767212) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2452568, prev total WAL file size 2452568, number of live WAL files 2.
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.768150) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1419KB)], [20(7642KB)]
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283768247, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 9279940, "oldest_snapshot_seqno": -1}
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3432 keys, 7302639 bytes, temperature: kUnknown
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283818244, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 7302639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7276400, "index_size": 16529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8645, "raw_key_size": 81971, "raw_average_key_size": 23, "raw_value_size": 7211171, "raw_average_value_size": 2101, "num_data_blocks": 731, "num_entries": 3432, "num_filter_entries": 3432, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936283, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.818500) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 7302639 bytes
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.820799) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 185.3 rd, 145.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.4, 7.5 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(11.4) write-amplify(5.0) OK, records in: 3874, records dropped: 442 output_compression: NoCompression
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.820822) EVENT_LOG_v1 {"time_micros": 1768936283820810, "job": 6, "event": "compaction_finished", "compaction_time_micros": 50072, "compaction_time_cpu_micros": 16120, "output_level": 6, "num_output_files": 1, "total_output_size": 7302639, "num_input_records": 3874, "num_output_records": 3432, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283821208, "job": 6, "event": "table_file_deletion", "file_number": 22}
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936283822773, "job": 6, "event": "table_file_deletion", "file_number": 20}
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.768002) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.822841) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.822849) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.822852) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.822856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:11:23 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:11:23.822859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:11:24 compute-0 sudo[138659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkrlhyodcrmlqccnlvxqwhvvsdiymght ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936283.8013387-252-275311482329480/AnsiballZ_command.py'
Jan 20 19:11:24 compute-0 sudo[138659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:24 compute-0 python3.9[138661]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:11:24 compute-0 sudo[138659]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:24 compute-0 sudo[138812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxlxkksjxnkalqwfpgziwgzmbpyqkzkz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936284.3572376-260-200442519590074/AnsiballZ_stat.py'
Jan 20 19:11:24 compute-0 sudo[138812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:24 compute-0 python3.9[138814]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:11:24 compute-0 ceph-mon[75120]: pgmap v379: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:24 compute-0 sudo[138812]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:25 compute-0 sudo[138966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hslkvqpzrzmxhazzbszcmygoiuaufzps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936284.9638617-268-148103530049658/AnsiballZ_command.py'
Jan 20 19:11:25 compute-0 sudo[138966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:25 compute-0 python3.9[138968]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:11:25 compute-0 sudo[138966]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:25 compute-0 sudo[139121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfvuemklpiqxhjcsowyepxkusgwsrjgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936285.5695183-276-157937181459177/AnsiballZ_file.py'
Jan 20 19:11:25 compute-0 sudo[139121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:25 compute-0 ceph-mon[75120]: pgmap v380: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:26 compute-0 python3.9[139123]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:26 compute-0 sudo[139121]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:27 compute-0 python3.9[139273]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:11:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:27 compute-0 sudo[139424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvvyncjkvpgrsrhuvuxnnzgvmtdmnpsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936287.6294959-316-137195992857653/AnsiballZ_command.py'
Jan 20 19:11:27 compute-0 sudo[139424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:28 compute-0 python3.9[139426]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:1e:0a:8d:1d:08:09" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:11:28 compute-0 ovs-vsctl[139427]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:1e:0a:8d:1d:08:09 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 20 19:11:28 compute-0 sudo[139424]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:28 compute-0 sudo[139577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-czkztnsyopubgpvequdtaludgtpnmnro ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936288.3077984-325-18625108626196/AnsiballZ_command.py'
Jan 20 19:11:28 compute-0 sudo[139577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:28 compute-0 python3.9[139579]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ovs-vsctl show | grep -q "Manager"
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:11:28 compute-0 sudo[139577]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:28 compute-0 ceph-mon[75120]: pgmap v381: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:29 compute-0 sudo[139732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwiymcxmbjdftldzjhyjrwdxgveatlxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936288.8820322-333-121987099018667/AnsiballZ_command.py'
Jan 20 19:11:29 compute-0 sudo[139732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:29 compute-0 python3.9[139734]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:11:29 compute-0 ovs-vsctl[139735]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 20 19:11:29 compute-0 sudo[139732]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:29 compute-0 python3.9[139885]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:11:30 compute-0 sudo[140037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzfdivfmaymyichxhilaroftkgartzhv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936290.106266-350-169788447063202/AnsiballZ_file.py'
Jan 20 19:11:30 compute-0 sudo[140037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:30 compute-0 python3.9[140039]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:11:30 compute-0 sudo[140037]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:30 compute-0 ceph-mon[75120]: pgmap v382: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:30 compute-0 sudo[140189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oftozmkzgzwmojpfgvreedocxfjwulke ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936290.6967437-358-157346912315939/AnsiballZ_stat.py'
Jan 20 19:11:30 compute-0 sudo[140189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:31 compute-0 python3.9[140191]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:31 compute-0 sudo[140189]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:31 compute-0 sudo[140267]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blomeyozeyyurhkmyhokajtxiearkbhw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936290.6967437-358-157346912315939/AnsiballZ_file.py'
Jan 20 19:11:31 compute-0 sudo[140267]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:11:31
Jan 20 19:11:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:11:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:11:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'images', 'default.rgw.log', 'vms', 'default.rgw.meta', 'default.rgw.control', 'backups']
Jan 20 19:11:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:11:31 compute-0 python3.9[140269]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:11:31 compute-0 sudo[140267]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:31 compute-0 sudo[140419]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcxpdnbtektupggtxkslpvouhpfqozyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936291.7266893-358-236522330993030/AnsiballZ_stat.py'
Jan 20 19:11:31 compute-0 sudo[140419]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:32 compute-0 python3.9[140421]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:32 compute-0 sudo[140419]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:32 compute-0 sudo[140497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxrihedwntkajxpxkkfsestqvqniiwzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936291.7266893-358-236522330993030/AnsiballZ_file.py'
Jan 20 19:11:32 compute-0 sudo[140497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:32 compute-0 python3.9[140499]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:11:32 compute-0 sudo[140497]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:32 compute-0 ceph-mon[75120]: pgmap v383: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:32 compute-0 sudo[140649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufwfrsqbcnyxhvfpzfeoicfpnltvzdpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936292.7321343-381-13561702528937/AnsiballZ_file.py'
Jan 20 19:11:32 compute-0 sudo[140649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:33 compute-0 python3.9[140651]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:33 compute-0 sudo[140649]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:33 compute-0 sudo[140801]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzxkmvhathpuqaleldwuxczvhmmamxpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936293.3137639-389-136909314195449/AnsiballZ_stat.py'
Jan 20 19:11:33 compute-0 sudo[140801]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:33 compute-0 python3.9[140803]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:33 compute-0 sudo[140801]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:33 compute-0 ceph-mon[75120]: pgmap v384: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:33 compute-0 sudo[140879]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qpfsxahmizobzncsgwyuduyntiboamyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936293.3137639-389-136909314195449/AnsiballZ_file.py'
Jan 20 19:11:33 compute-0 sudo[140879]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:34 compute-0 python3.9[140881]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:34 compute-0 sudo[140879]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:11:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:11:34 compute-0 sudo[141031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fobyifeackzbujuqhxvubwuseoilurod ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936294.3491287-401-87983934385062/AnsiballZ_stat.py'
Jan 20 19:11:34 compute-0 sudo[141031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:34 compute-0 python3.9[141033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:34 compute-0 sudo[141031]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:35 compute-0 sudo[141109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrxqcbxhiiljkknpdbbfiimivggoybvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936294.3491287-401-87983934385062/AnsiballZ_file.py'
Jan 20 19:11:35 compute-0 sudo[141109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:35 compute-0 python3.9[141111]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:35 compute-0 sudo[141109]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:35 compute-0 sudo[141261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmhhilmnwnjkdisbdswfcypggshiembi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936295.3567493-413-94075454439553/AnsiballZ_systemd.py'
Jan 20 19:11:35 compute-0 sudo[141261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:35 compute-0 python3.9[141263]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:11:35 compute-0 systemd[1]: Reloading.
Jan 20 19:11:36 compute-0 systemd-sysv-generator[141294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:11:36 compute-0 systemd-rc-local-generator[141291]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:11:36 compute-0 sudo[141261]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:36 compute-0 ceph-mon[75120]: pgmap v385: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:36 compute-0 sudo[141450]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oweeizstzgcfdqsauxqhiueadosrodnj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936296.5063128-421-260316261151183/AnsiballZ_stat.py'
Jan 20 19:11:36 compute-0 sudo[141450]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:37 compute-0 python3.9[141452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:37 compute-0 sudo[141450]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:37 compute-0 sudo[141528]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hksjqajpxfzztliyikmqpkrfpazqdcqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936296.5063128-421-260316261151183/AnsiballZ_file.py'
Jan 20 19:11:37 compute-0 sudo[141528]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:37 compute-0 python3.9[141530]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:37 compute-0 sudo[141528]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:37 compute-0 sudo[141680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzujiyyapywvkqidyblcqpytoejexgyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936297.5968032-433-177308040813113/AnsiballZ_stat.py'
Jan 20 19:11:37 compute-0 sudo[141680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:38 compute-0 python3.9[141682]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:38 compute-0 sudo[141680]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:38 compute-0 sudo[141758]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ytgxuyrfmpvlgyqkztkqlkxsxrjsusoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936297.5968032-433-177308040813113/AnsiballZ_file.py'
Jan 20 19:11:38 compute-0 sudo[141758]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:38 compute-0 python3.9[141760]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:38 compute-0 sudo[141758]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:38 compute-0 ceph-mon[75120]: pgmap v386: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:38 compute-0 sudo[141910]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pddtcgckspywxvsisgtwssyhzschrhdx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936298.6836803-445-124755564012263/AnsiballZ_systemd.py'
Jan 20 19:11:38 compute-0 sudo[141910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:39 compute-0 python3.9[141912]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:11:39 compute-0 systemd[1]: Reloading.
Jan 20 19:11:39 compute-0 systemd-sysv-generator[141941]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:11:39 compute-0 systemd-rc-local-generator[141936]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:11:39 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 19:11:39 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 19:11:39 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 19:11:39 compute-0 systemd[1]: Finished Create netns directory.
Jan 20 19:11:39 compute-0 sudo[141910]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:40 compute-0 sudo[142103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaxdrrvaxxfqdtibyahitrrkigjznmxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936299.895333-455-180792299699028/AnsiballZ_file.py'
Jan 20 19:11:40 compute-0 sudo[142103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:40 compute-0 python3.9[142105]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:11:40 compute-0 sudo[142103]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:40 compute-0 sudo[142255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kkiulrwvisbsknqczocukkftcpghqdmd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936300.4849029-463-239350650588294/AnsiballZ_stat.py'
Jan 20 19:11:40 compute-0 sudo[142255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:40 compute-0 ceph-mon[75120]: pgmap v387: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:40 compute-0 python3.9[142257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:40 compute-0 sudo[142255]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:41 compute-0 sudo[142378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iorhulabfsssiwfvhdbtzmklvalulkjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936300.4849029-463-239350650588294/AnsiballZ_copy.py'
Jan 20 19:11:41 compute-0 sudo[142378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:41 compute-0 python3.9[142380]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936300.4849029-463-239350650588294/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:11:41 compute-0 sudo[142378]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:42 compute-0 sudo[142530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybanfxqcflrfnwlaqelhmcfuiqymkbvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936301.7780046-480-44571123608578/AnsiballZ_file.py'
Jan 20 19:11:42 compute-0 sudo[142530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:42 compute-0 python3.9[142532]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:42 compute-0 sudo[142530]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:42 compute-0 sudo[142682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umbfcoplffjfnnnevrlrsqgvaekilaqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936302.4595907-488-191982190555269/AnsiballZ_file.py'
Jan 20 19:11:42 compute-0 sudo[142682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:42 compute-0 ceph-mon[75120]: pgmap v388: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:42 compute-0 python3.9[142684]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:11:42 compute-0 sudo[142682]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:43 compute-0 sudo[142834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjadulxopnxoktkeapkwiqxclyoejzku ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936303.0507336-496-266932400009190/AnsiballZ_stat.py'
Jan 20 19:11:43 compute-0 sudo[142834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:43 compute-0 python3.9[142836]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:11:43 compute-0 sudo[142834]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:43 compute-0 sudo[142957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfhpppxadamrmkodlgqqkkaetihcxvpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936303.0507336-496-266932400009190/AnsiballZ_copy.py'
Jan 20 19:11:43 compute-0 sudo[142957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:44 compute-0 python3.9[142959]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936303.0507336-496-266932400009190/.source.json _original_basename=.2kujvss6 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:44 compute-0 sudo[142957]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:11:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:11:44 compute-0 python3.9[143109]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:44 compute-0 ceph-mon[75120]: pgmap v389: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:46 compute-0 ceph-mon[75120]: pgmap v390: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:46 compute-0 sudo[143530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvkkjobpxttwfbtfnzntavlfgbyzjhkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936306.1646194-536-2967760220452/AnsiballZ_container_config_data.py'
Jan 20 19:11:46 compute-0 sudo[143530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:46 compute-0 python3.9[143532]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 20 19:11:46 compute-0 sudo[143530]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:47 compute-0 sudo[143682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfethmydpbnawqgatxjcwyaimbzcqluy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936307.2449648-547-128397844043658/AnsiballZ_container_config_hash.py'
Jan 20 19:11:47 compute-0 sudo[143682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:47 compute-0 python3.9[143684]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 19:11:47 compute-0 sudo[143682]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:48 compute-0 sudo[143834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rcsgeghurboqpjuvmhtfzyzbvbbhxvjp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768936308.1675544-557-164751059106487/AnsiballZ_edpm_container_manage.py'
Jan 20 19:11:48 compute-0 sudo[143834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:48 compute-0 ceph-mon[75120]: pgmap v391: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:48 compute-0 python3[143836]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 19:11:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:50 compute-0 ceph-mon[75120]: pgmap v392: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:52 compute-0 ceph-mon[75120]: pgmap v393: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:54 compute-0 ceph-mon[75120]: pgmap v394: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:54 compute-0 podman[143849]: 2026-01-20 19:11:54.212977332 +0000 UTC m=+5.197244125 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 19:11:54 compute-0 sshd-session[143931]: Invalid user ethereum from 45.148.10.240 port 46806
Jan 20 19:11:54 compute-0 sshd-session[143931]: Connection closed by invalid user ethereum 45.148.10.240 port 46806 [preauth]
Jan 20 19:11:54 compute-0 podman[143971]: 2026-01-20 19:11:54.358798178 +0000 UTC m=+0.047243266 container create c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 19:11:54 compute-0 podman[143971]: 2026-01-20 19:11:54.335260472 +0000 UTC m=+0.023705590 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 19:11:54 compute-0 python3[143836]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 20 19:11:54 compute-0 sudo[143834]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:54 compute-0 sudo[144157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lnuilrcwayuhpesitlbwambacafkfmip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936314.7177413-565-81787165573600/AnsiballZ_stat.py'
Jan 20 19:11:54 compute-0 sudo[144157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:55 compute-0 python3.9[144159]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:11:55 compute-0 sudo[144157]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:55 compute-0 sudo[144311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbwzhaamcrhzddsdbkgzquxppwaxizet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936315.4110715-574-129430957828030/AnsiballZ_file.py'
Jan 20 19:11:55 compute-0 sudo[144311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:55 compute-0 python3.9[144313]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:55 compute-0 sudo[144311]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:56 compute-0 sudo[144387]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyvwqqbkynlciqjgztgeikydsdmgvkou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936315.4110715-574-129430957828030/AnsiballZ_stat.py'
Jan 20 19:11:56 compute-0 sudo[144387]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:56 compute-0 python3.9[144389]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:11:56 compute-0 sudo[144387]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:56 compute-0 sudo[144538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yavcnhuoroqkzzapqxqdgdevfodinghf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936316.3312128-574-239768728118749/AnsiballZ_copy.py'
Jan 20 19:11:56 compute-0 sudo[144538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:56 compute-0 ceph-mon[75120]: pgmap v395: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:56 compute-0 python3.9[144540]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768936316.3312128-574-239768728118749/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:11:56 compute-0 sudo[144538]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:57 compute-0 sudo[144614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztffjyszvrloziukpyvgphjvetuifjfq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936316.3312128-574-239768728118749/AnsiballZ_systemd.py'
Jan 20 19:11:57 compute-0 sudo[144614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:57 compute-0 python3.9[144616]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 19:11:57 compute-0 systemd[1]: Reloading.
Jan 20 19:11:57 compute-0 systemd-rc-local-generator[144644]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:11:57 compute-0 systemd-sysv-generator[144647]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:11:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:57 compute-0 sudo[144614]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:58 compute-0 sudo[144727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-offocyloqsdmanrqbzorvwrbmfeknyzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936316.3312128-574-239768728118749/AnsiballZ_systemd.py'
Jan 20 19:11:58 compute-0 sudo[144727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:11:58 compute-0 sshd-session[144652]: userauth_pubkey: signature algorithm ssh-rsa not in PubkeyAcceptedAlgorithms [preauth]
Jan 20 19:11:58 compute-0 python3.9[144729]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:11:58 compute-0 systemd[1]: Reloading.
Jan 20 19:11:58 compute-0 systemd-sysv-generator[144762]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:11:58 compute-0 systemd-rc-local-generator[144759]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:11:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:11:58 compute-0 ceph-mon[75120]: pgmap v396: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:58 compute-0 systemd[1]: Starting ovn_controller container...
Jan 20 19:11:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:11:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c442f5e904669fed25c3c9d2416fe551779526f820d8f46063b8f88c0556cc0f/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 20 19:11:59 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a.
Jan 20 19:11:59 compute-0 podman[144771]: 2026-01-20 19:11:59.112124458 +0000 UTC m=+0.216290809 container init c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 19:11:59 compute-0 ovn_controller[144787]: + sudo -E kolla_set_configs
Jan 20 19:11:59 compute-0 podman[144771]: 2026-01-20 19:11:59.139210243 +0000 UTC m=+0.243376574 container start c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 20 19:11:59 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 20 19:11:59 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 20 19:11:59 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 20 19:11:59 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 20 19:11:59 compute-0 systemd[144806]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 20 19:11:59 compute-0 edpm-start-podman-container[144771]: ovn_controller
Jan 20 19:11:59 compute-0 edpm-start-podman-container[144770]: Creating additional drop-in dependency for "ovn_controller" (c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a)
Jan 20 19:11:59 compute-0 systemd[1]: Reloading.
Jan 20 19:11:59 compute-0 podman[144794]: 2026-01-20 19:11:59.29954768 +0000 UTC m=+0.150325339 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 20 19:11:59 compute-0 systemd[144806]: Queued start job for default target Main User Target.
Jan 20 19:11:59 compute-0 systemd[144806]: Created slice User Application Slice.
Jan 20 19:11:59 compute-0 systemd[144806]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 20 19:11:59 compute-0 systemd[144806]: Started Daily Cleanup of User's Temporary Directories.
Jan 20 19:11:59 compute-0 systemd[144806]: Reached target Paths.
Jan 20 19:11:59 compute-0 systemd[144806]: Reached target Timers.
Jan 20 19:11:59 compute-0 systemd[144806]: Starting D-Bus User Message Bus Socket...
Jan 20 19:11:59 compute-0 systemd[144806]: Starting Create User's Volatile Files and Directories...
Jan 20 19:11:59 compute-0 systemd-rc-local-generator[144870]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:11:59 compute-0 systemd-sysv-generator[144875]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:11:59 compute-0 systemd[144806]: Finished Create User's Volatile Files and Directories.
Jan 20 19:11:59 compute-0 systemd[144806]: Listening on D-Bus User Message Bus Socket.
Jan 20 19:11:59 compute-0 systemd[144806]: Reached target Sockets.
Jan 20 19:11:59 compute-0 systemd[144806]: Reached target Basic System.
Jan 20 19:11:59 compute-0 systemd[144806]: Reached target Main User Target.
Jan 20 19:11:59 compute-0 systemd[144806]: Startup finished in 151ms.
Jan 20 19:11:59 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 20 19:11:59 compute-0 systemd[1]: Started ovn_controller container.
Jan 20 19:11:59 compute-0 systemd[1]: c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a-1001d1e7b577e2eb.service: Main process exited, code=exited, status=1/FAILURE
Jan 20 19:11:59 compute-0 systemd[1]: c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a-1001d1e7b577e2eb.service: Failed with result 'exit-code'.
Jan 20 19:11:59 compute-0 systemd[1]: Started Session c1 of User root.
Jan 20 19:11:59 compute-0 sudo[144727]: pam_unix(sudo:session): session closed for user root
Jan 20 19:11:59 compute-0 ovn_controller[144787]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 19:11:59 compute-0 ovn_controller[144787]: INFO:__main__:Validating config file
Jan 20 19:11:59 compute-0 ovn_controller[144787]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 19:11:59 compute-0 ovn_controller[144787]: INFO:__main__:Writing out command to execute
Jan 20 19:11:59 compute-0 ovn_controller[144787]: ++ cat /run_command
Jan 20 19:11:59 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 20 19:11:59 compute-0 ovn_controller[144787]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 20 19:11:59 compute-0 ovn_controller[144787]: + ARGS=
Jan 20 19:11:59 compute-0 ovn_controller[144787]: + sudo kolla_copy_cacerts
Jan 20 19:11:59 compute-0 systemd[1]: Started Session c2 of User root.
Jan 20 19:11:59 compute-0 ovn_controller[144787]: + [[ ! -n '' ]]
Jan 20 19:11:59 compute-0 ovn_controller[144787]: + . kolla_extend_start
Jan 20 19:11:59 compute-0 ovn_controller[144787]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 20 19:11:59 compute-0 ovn_controller[144787]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 20 19:11:59 compute-0 ovn_controller[144787]: + umask 0022
Jan 20 19:11:59 compute-0 ovn_controller[144787]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 20 19:11:59 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 20 19:11:59 compute-0 NetworkManager[48913]: <info>  [1768936319.7158] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Jan 20 19:11:59 compute-0 NetworkManager[48913]: <info>  [1768936319.7167] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 20 19:11:59 compute-0 NetworkManager[48913]: <warn>  [1768936319.7169] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 20 19:11:59 compute-0 NetworkManager[48913]: <info>  [1768936319.7175] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 20 19:11:59 compute-0 NetworkManager[48913]: <info>  [1768936319.7180] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Jan 20 19:11:59 compute-0 NetworkManager[48913]: <info>  [1768936319.7183] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 20 19:11:59 compute-0 kernel: br-int: entered promiscuous mode
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00010|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00011|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00012|features|INFO|OVS Feature: ct_flush, state: supported
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00013|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00014|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00015|main|INFO|OVS feature set changed, force recompute.
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00016|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00019|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00020|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00022|main|INFO|OVS feature set changed, force recompute.
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00001|statctrl(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00002|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00003|rconn(ovn_statctrl2)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 19:11:59 compute-0 ovn_controller[144787]: 2026-01-20T19:11:59Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 20 19:11:59 compute-0 NetworkManager[48913]: <info>  [1768936319.7332] manager: (ovn-beb8dd-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 20 19:11:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:11:59 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 20 19:11:59 compute-0 NetworkManager[48913]: <info>  [1768936319.7524] device (genev_sys_6081): carrier: link connected
Jan 20 19:11:59 compute-0 NetworkManager[48913]: <info>  [1768936319.7527] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Jan 20 19:11:59 compute-0 systemd-udevd[144920]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:11:59 compute-0 systemd-udevd[144923]: Network interface NamePolicy= disabled on kernel command line.
Jan 20 19:12:00 compute-0 python3.9[145051]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 20 19:12:00 compute-0 ceph-mon[75120]: pgmap v397: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:00 compute-0 sudo[145201]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbwjhbiopijhtycxuveabgmmwwzalzii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936320.7322195-619-245181751861086/AnsiballZ_stat.py'
Jan 20 19:12:00 compute-0 sudo[145201]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:01 compute-0 python3.9[145203]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:01 compute-0 sudo[145201]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:01 compute-0 sudo[145324]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mocrxifobhugtjmcpqfypxmmzwryjmgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936320.7322195-619-245181751861086/AnsiballZ_copy.py'
Jan 20 19:12:01 compute-0 sudo[145324]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:01 compute-0 python3.9[145326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936320.7322195-619-245181751861086/.source.yaml _original_basename=.4fa6tlds follow=False checksum=8b5a37e67ac838beaa0c9af9ba2de80244d453f2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:12:01 compute-0 sudo[145324]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:01 compute-0 ceph-mon[75120]: pgmap v398: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:02 compute-0 sudo[145476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emhocgenarkxpkgwwfzdznlxvlelzvaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936321.8589582-634-104448326992856/AnsiballZ_command.py'
Jan 20 19:12:02 compute-0 sudo[145476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:02 compute-0 python3.9[145478]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:12:02 compute-0 ovs-vsctl[145479]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 20 19:12:02 compute-0 sudo[145476]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:02 compute-0 sudo[145629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgeprmdelqqubrczqjwydlbobszzlhqz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936322.4848444-642-240725505293411/AnsiballZ_command.py'
Jan 20 19:12:02 compute-0 sudo[145629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:02 compute-0 python3.9[145631]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:12:02 compute-0 ovs-vsctl[145633]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 20 19:12:02 compute-0 sudo[145629]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:03 compute-0 sudo[145784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zimzysfhcrkcxhkdbnilfgqoerjfmoct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936323.2788284-656-47188338608219/AnsiballZ_command.py'
Jan 20 19:12:03 compute-0 sudo[145784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:03 compute-0 python3.9[145786]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:12:03 compute-0 ovs-vsctl[145787]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 20 19:12:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:03 compute-0 sudo[145784]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:04 compute-0 sshd-session[133598]: Connection closed by 192.168.122.30 port 50988
Jan 20 19:12:04 compute-0 sshd-session[133595]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:12:04 compute-0 systemd[1]: session-46.scope: Deactivated successfully.
Jan 20 19:12:04 compute-0 systemd[1]: session-46.scope: Consumed 56.463s CPU time.
Jan 20 19:12:04 compute-0 systemd-logind[797]: Session 46 logged out. Waiting for processes to exit.
Jan 20 19:12:04 compute-0 systemd-logind[797]: Removed session 46.
Jan 20 19:12:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:04 compute-0 ceph-mon[75120]: pgmap v399: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:06 compute-0 ceph-mon[75120]: pgmap v400: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:07 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:12:07 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 2095 writes, 9262 keys, 2095 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s
                                           Cumulative WAL: 2095 writes, 2095 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 2095 writes, 9262 keys, 2095 commit groups, 1.0 writes per commit group, ingest: 12.36 MB, 0.02 MB/s
                                           Interval WAL: 2095 writes, 2095 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     93.5      0.10              0.02         3    0.032       0      0       0.0       0.0
                                             L6      1/0    6.96 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6    135.7    119.7      0.12              0.05         2    0.060    7222    732       0.0       0.0
                                            Sum      1/0    6.96 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     75.8    108.1      0.22              0.07         5    0.043    7222    732       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     88.4    125.7      0.19              0.07         4    0.046    7222    732       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    135.7    119.7      0.12              0.05         2    0.060    7222    732       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    136.8      0.06              0.02         2    0.032       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.009, interval 0.009
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.2 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55eae3cfb8d0#2 capacity: 308.00 MB usage: 620.72 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 5.5e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(37,529.69 KB,0.167946%) FilterBlock(6,27.86 KB,0.00883325%) IndexBlock(6,63.17 KB,0.0200296%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 19:12:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:07 compute-0 sshd-session[144652]: Connection closed by authenticating user root 139.19.117.131 port 54684 [preauth]
Jan 20 19:12:08 compute-0 sudo[145812]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:12:08 compute-0 sudo[145812]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:08 compute-0 sudo[145812]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:08 compute-0 sudo[145837]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:12:08 compute-0 sudo[145837]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:08 compute-0 sudo[145837]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:12:08 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:12:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:12:08 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:12:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:12:08 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:12:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:12:08 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:12:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:12:08 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:12:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:12:08 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:12:08 compute-0 sudo[145893]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:12:08 compute-0 sudo[145893]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:08 compute-0 sudo[145893]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:08 compute-0 sudo[145918]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:12:08 compute-0 sudo[145918]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:08 compute-0 ceph-mon[75120]: pgmap v401: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:12:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:12:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:12:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:12:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:12:08 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:12:09 compute-0 podman[145955]: 2026-01-20 19:12:09.02822951 +0000 UTC m=+0.040070497 container create 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:12:09 compute-0 systemd[1]: Started libpod-conmon-44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c.scope.
Jan 20 19:12:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:09 compute-0 podman[145955]: 2026-01-20 19:12:09.011614993 +0000 UTC m=+0.023456010 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:12:09 compute-0 podman[145955]: 2026-01-20 19:12:09.132218612 +0000 UTC m=+0.144059629 container init 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:12:09 compute-0 podman[145955]: 2026-01-20 19:12:09.138954422 +0000 UTC m=+0.150795409 container start 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:12:09 compute-0 podman[145955]: 2026-01-20 19:12:09.142792994 +0000 UTC m=+0.154634001 container attach 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default)
Jan 20 19:12:09 compute-0 festive_swirles[145971]: 167 167
Jan 20 19:12:09 compute-0 systemd[1]: libpod-44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c.scope: Deactivated successfully.
Jan 20 19:12:09 compute-0 podman[145955]: 2026-01-20 19:12:09.144802503 +0000 UTC m=+0.156643490 container died 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:12:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a15c62d4bf086e2b1e8a20878b67a3480226f98b01b85e54e525f5a45b8774b-merged.mount: Deactivated successfully.
Jan 20 19:12:09 compute-0 podman[145955]: 2026-01-20 19:12:09.192009619 +0000 UTC m=+0.203850606 container remove 44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_REF=tentacle)
Jan 20 19:12:09 compute-0 systemd[1]: libpod-conmon-44c680bcee5bf60fb82d66e41484c64d5172337b3bcab9285ac531552262420c.scope: Deactivated successfully.
Jan 20 19:12:09 compute-0 sshd-session[145991]: Accepted publickey for zuul from 192.168.122.30 port 37392 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:12:09 compute-0 systemd-logind[797]: New session 48 of user zuul.
Jan 20 19:12:09 compute-0 podman[145998]: 2026-01-20 19:12:09.350787548 +0000 UTC m=+0.043278583 container create 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:12:09 compute-0 systemd[1]: Started Session 48 of User zuul.
Jan 20 19:12:09 compute-0 sshd-session[145991]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:12:09 compute-0 systemd[1]: Started libpod-conmon-375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f.scope.
Jan 20 19:12:09 compute-0 podman[145998]: 2026-01-20 19:12:09.330738549 +0000 UTC m=+0.023229604 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:12:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:09 compute-0 podman[145998]: 2026-01-20 19:12:09.472876972 +0000 UTC m=+0.165368037 container init 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0)
Jan 20 19:12:09 compute-0 podman[145998]: 2026-01-20 19:12:09.48035556 +0000 UTC m=+0.172846595 container start 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:12:09 compute-0 podman[145998]: 2026-01-20 19:12:09.483976917 +0000 UTC m=+0.176467962 container attach 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:12:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:09 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 20 19:12:09 compute-0 systemd[144806]: Activating special unit Exit the Session...
Jan 20 19:12:09 compute-0 systemd[144806]: Stopped target Main User Target.
Jan 20 19:12:09 compute-0 systemd[144806]: Stopped target Basic System.
Jan 20 19:12:09 compute-0 systemd[144806]: Stopped target Paths.
Jan 20 19:12:09 compute-0 systemd[144806]: Stopped target Sockets.
Jan 20 19:12:09 compute-0 systemd[144806]: Stopped target Timers.
Jan 20 19:12:09 compute-0 systemd[144806]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 20 19:12:09 compute-0 systemd[144806]: Closed D-Bus User Message Bus Socket.
Jan 20 19:12:09 compute-0 systemd[144806]: Stopped Create User's Volatile Files and Directories.
Jan 20 19:12:09 compute-0 systemd[144806]: Removed slice User Application Slice.
Jan 20 19:12:09 compute-0 systemd[144806]: Reached target Shutdown.
Jan 20 19:12:09 compute-0 systemd[144806]: Finished Exit the Session.
Jan 20 19:12:09 compute-0 systemd[144806]: Reached target Exit the Session.
Jan 20 19:12:09 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 20 19:12:09 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 20 19:12:09 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 20 19:12:09 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 20 19:12:09 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 20 19:12:09 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 20 19:12:09 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 20 19:12:09 compute-0 distracted_greider[146018]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:12:09 compute-0 distracted_greider[146018]: --> All data devices are unavailable
Jan 20 19:12:09 compute-0 systemd[1]: libpod-375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f.scope: Deactivated successfully.
Jan 20 19:12:09 compute-0 podman[145998]: 2026-01-20 19:12:09.983473288 +0000 UTC m=+0.675964323 container died 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 19:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-e57ad9ac075e127c1de97ba2c071ebea9e15182c0bcd68e517522fd49f876515-merged.mount: Deactivated successfully.
Jan 20 19:12:10 compute-0 podman[145998]: 2026-01-20 19:12:10.035937019 +0000 UTC m=+0.728428054 container remove 375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_greider, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:12:10 compute-0 systemd[1]: libpod-conmon-375eccef64dfb37fb42fe6ee164e32cd5abb8b1cbbc9852863c97546e6c8926f.scope: Deactivated successfully.
Jan 20 19:12:10 compute-0 sudo[145918]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:10 compute-0 sudo[146199]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:12:10 compute-0 sudo[146199]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:10 compute-0 sudo[146199]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:10 compute-0 sudo[146225]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:12:10 compute-0 sudo[146225]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:10 compute-0 python3.9[146200]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:12:10 compute-0 podman[146263]: 2026-01-20 19:12:10.467446057 +0000 UTC m=+0.041297726 container create 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:12:10 compute-0 systemd[1]: Started libpod-conmon-4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492.scope.
Jan 20 19:12:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:10 compute-0 podman[146263]: 2026-01-20 19:12:10.450761079 +0000 UTC m=+0.024612758 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:12:10 compute-0 podman[146263]: 2026-01-20 19:12:10.548244686 +0000 UTC m=+0.122096375 container init 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 19:12:10 compute-0 podman[146263]: 2026-01-20 19:12:10.556634306 +0000 UTC m=+0.130485975 container start 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:12:10 compute-0 podman[146263]: 2026-01-20 19:12:10.559669579 +0000 UTC m=+0.133521248 container attach 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:12:10 compute-0 xenodochial_gates[146281]: 167 167
Jan 20 19:12:10 compute-0 systemd[1]: libpod-4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492.scope: Deactivated successfully.
Jan 20 19:12:10 compute-0 conmon[146281]: conmon 4de0e951e4d6f8546b19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492.scope/container/memory.events
Jan 20 19:12:10 compute-0 podman[146263]: 2026-01-20 19:12:10.562567448 +0000 UTC m=+0.136419117 container died 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:12:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-083b46913e28819c84f268e660a83acf8846858c6efb062e65d5d2879662afb3-merged.mount: Deactivated successfully.
Jan 20 19:12:10 compute-0 podman[146263]: 2026-01-20 19:12:10.612828317 +0000 UTC m=+0.186679976 container remove 4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=xenodochial_gates, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 19:12:10 compute-0 systemd[1]: libpod-conmon-4de0e951e4d6f8546b19dabd248596a4d49b189bc5ebeb89e92ca60ca638e492.scope: Deactivated successfully.
Jan 20 19:12:10 compute-0 podman[146329]: 2026-01-20 19:12:10.77383012 +0000 UTC m=+0.037618809 container create eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:12:10 compute-0 systemd[1]: Started libpod-conmon-eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb.scope.
Jan 20 19:12:10 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f4a81df92b3d8e9d06a9b8d5c2b257cf71dd564716a559f9fe1e92dda0a330/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f4a81df92b3d8e9d06a9b8d5c2b257cf71dd564716a559f9fe1e92dda0a330/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f4a81df92b3d8e9d06a9b8d5c2b257cf71dd564716a559f9fe1e92dda0a330/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62f4a81df92b3d8e9d06a9b8d5c2b257cf71dd564716a559f9fe1e92dda0a330/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:10 compute-0 podman[146329]: 2026-01-20 19:12:10.851874562 +0000 UTC m=+0.115663271 container init eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:12:10 compute-0 podman[146329]: 2026-01-20 19:12:10.758083894 +0000 UTC m=+0.021872603 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:12:10 compute-0 podman[146329]: 2026-01-20 19:12:10.860196371 +0000 UTC m=+0.123985060 container start eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:12:10 compute-0 podman[146329]: 2026-01-20 19:12:10.864819691 +0000 UTC m=+0.128608390 container attach eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True)
Jan 20 19:12:10 compute-0 ceph-mon[75120]: pgmap v402: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:11 compute-0 eager_babbage[146350]: {
Jan 20 19:12:11 compute-0 eager_babbage[146350]:     "0": [
Jan 20 19:12:11 compute-0 eager_babbage[146350]:         {
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "devices": [
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "/dev/loop3"
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             ],
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_name": "ceph_lv0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_size": "21470642176",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "name": "ceph_lv0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "tags": {
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.cluster_name": "ceph",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.crush_device_class": "",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.encrypted": "0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.objectstore": "bluestore",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.osd_id": "0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.type": "block",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.vdo": "0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.with_tpm": "0"
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             },
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "type": "block",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "vg_name": "ceph_vg0"
Jan 20 19:12:11 compute-0 eager_babbage[146350]:         }
Jan 20 19:12:11 compute-0 eager_babbage[146350]:     ],
Jan 20 19:12:11 compute-0 eager_babbage[146350]:     "1": [
Jan 20 19:12:11 compute-0 eager_babbage[146350]:         {
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "devices": [
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "/dev/loop4"
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             ],
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_name": "ceph_lv1",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_size": "21470642176",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "name": "ceph_lv1",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "tags": {
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.cluster_name": "ceph",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.crush_device_class": "",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.encrypted": "0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.objectstore": "bluestore",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.osd_id": "1",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.type": "block",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.vdo": "0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.with_tpm": "0"
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             },
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "type": "block",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "vg_name": "ceph_vg1"
Jan 20 19:12:11 compute-0 eager_babbage[146350]:         }
Jan 20 19:12:11 compute-0 eager_babbage[146350]:     ],
Jan 20 19:12:11 compute-0 eager_babbage[146350]:     "2": [
Jan 20 19:12:11 compute-0 eager_babbage[146350]:         {
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "devices": [
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "/dev/loop5"
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             ],
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_name": "ceph_lv2",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_size": "21470642176",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "name": "ceph_lv2",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "tags": {
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.cluster_name": "ceph",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.crush_device_class": "",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.encrypted": "0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.objectstore": "bluestore",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.osd_id": "2",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.type": "block",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.vdo": "0",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:                 "ceph.with_tpm": "0"
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             },
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "type": "block",
Jan 20 19:12:11 compute-0 eager_babbage[146350]:             "vg_name": "ceph_vg2"
Jan 20 19:12:11 compute-0 eager_babbage[146350]:         }
Jan 20 19:12:11 compute-0 eager_babbage[146350]:     ]
Jan 20 19:12:11 compute-0 eager_babbage[146350]: }
Jan 20 19:12:11 compute-0 systemd[1]: libpod-eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb.scope: Deactivated successfully.
Jan 20 19:12:11 compute-0 podman[146329]: 2026-01-20 19:12:11.146712768 +0000 UTC m=+0.410501467 container died eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030)
Jan 20 19:12:11 compute-0 systemd[1]: var-lib-containers-storage-overlay-62f4a81df92b3d8e9d06a9b8d5c2b257cf71dd564716a559f9fe1e92dda0a330-merged.mount: Deactivated successfully.
Jan 20 19:12:11 compute-0 podman[146329]: 2026-01-20 19:12:11.192923361 +0000 UTC m=+0.456712060 container remove eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_babbage, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:12:11 compute-0 systemd[1]: libpod-conmon-eb880a322292276285628ffc073fb7e098779c1933166a3c467ec63c7fb433cb.scope: Deactivated successfully.
Jan 20 19:12:11 compute-0 sudo[146225]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:11 compute-0 sudo[146494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whqxwnrpvicrikaogsgfhqibbacvmsoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936330.8427732-29-121413631367907/AnsiballZ_file.py'
Jan 20 19:12:11 compute-0 sudo[146494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:11 compute-0 sudo[146497]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:12:11 compute-0 sudo[146497]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:11 compute-0 sudo[146497]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:11 compute-0 sudo[146522]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:12:11 compute-0 sudo[146522]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:11 compute-0 python3.9[146496]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:11 compute-0 sudo[146494]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:11 compute-0 podman[146591]: 2026-01-20 19:12:11.586691189 +0000 UTC m=+0.024745071 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:12:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:11 compute-0 sudo[146721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-truzaurznjnietbdtnclqlwskhuymtln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936331.5697439-29-190557190213563/AnsiballZ_file.py'
Jan 20 19:12:11 compute-0 sudo[146721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:12 compute-0 podman[146591]: 2026-01-20 19:12:12.117667441 +0000 UTC m=+0.555721303 container create 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 19:12:12 compute-0 ceph-mon[75120]: pgmap v403: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:12 compute-0 systemd[1]: Started libpod-conmon-5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7.scope.
Jan 20 19:12:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:12 compute-0 podman[146591]: 2026-01-20 19:12:12.231551339 +0000 UTC m=+0.669605221 container init 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:12:12 compute-0 podman[146591]: 2026-01-20 19:12:12.237915911 +0000 UTC m=+0.675969763 container start 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 19:12:12 compute-0 podman[146591]: 2026-01-20 19:12:12.241414804 +0000 UTC m=+0.679468666 container attach 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 19:12:12 compute-0 funny_napier[146731]: 167 167
Jan 20 19:12:12 compute-0 systemd[1]: libpod-5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7.scope: Deactivated successfully.
Jan 20 19:12:12 compute-0 podman[146591]: 2026-01-20 19:12:12.243185387 +0000 UTC m=+0.681239249 container died 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:12:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-c682ecb244187bcc1eb163863e12a1f7d3d6ca20073b62e702fc50065e67f573-merged.mount: Deactivated successfully.
Jan 20 19:12:12 compute-0 podman[146591]: 2026-01-20 19:12:12.278569591 +0000 UTC m=+0.716623453 container remove 5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=funny_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:12:12 compute-0 python3.9[146723]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:12 compute-0 systemd[1]: libpod-conmon-5485dbc7eb13906d256332ee78beda80d8679ba81d2ed3e0392e91b3f5584cb7.scope: Deactivated successfully.
Jan 20 19:12:12 compute-0 sudo[146721]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:12 compute-0 podman[146779]: 2026-01-20 19:12:12.42561307 +0000 UTC m=+0.038814367 container create dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:12:12 compute-0 systemd[1]: Started libpod-conmon-dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4.scope.
Jan 20 19:12:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d810e4db007c82d9e1f9e8f277cbce729f95c9eaa20b86dd99239cce099287a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d810e4db007c82d9e1f9e8f277cbce729f95c9eaa20b86dd99239cce099287a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d810e4db007c82d9e1f9e8f277cbce729f95c9eaa20b86dd99239cce099287a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:12 compute-0 podman[146779]: 2026-01-20 19:12:12.409760591 +0000 UTC m=+0.022961908 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:12:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d810e4db007c82d9e1f9e8f277cbce729f95c9eaa20b86dd99239cce099287a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:12:12 compute-0 podman[146779]: 2026-01-20 19:12:12.51567861 +0000 UTC m=+0.128879927 container init dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 19:12:12 compute-0 podman[146779]: 2026-01-20 19:12:12.523495216 +0000 UTC m=+0.136696513 container start dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:12:12 compute-0 podman[146779]: 2026-01-20 19:12:12.527720457 +0000 UTC m=+0.140921754 container attach dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:12:12 compute-0 sudo[146926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qohaeyoahwggaegutwcjjaalitbsmxyi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936332.421413-29-248689512916912/AnsiballZ_file.py'
Jan 20 19:12:12 compute-0 sudo[146926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:12 compute-0 python3.9[146928]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:12 compute-0 sudo[146926]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:13 compute-0 lvm[147102]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:12:13 compute-0 lvm[147102]: VG ceph_vg0 finished
Jan 20 19:12:13 compute-0 lvm[147108]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:12:13 compute-0 lvm[147108]: VG ceph_vg1 finished
Jan 20 19:12:13 compute-0 lvm[147128]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:12:13 compute-0 lvm[147128]: VG ceph_vg2 finished
Jan 20 19:12:13 compute-0 sudo[147155]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdvooklwjgehrsjhmsagqypovolbmfdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936333.0178719-29-277319635049990/AnsiballZ_file.py'
Jan 20 19:12:13 compute-0 sudo[147155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:13 compute-0 lucid_chaplygin[146842]: {}
Jan 20 19:12:13 compute-0 systemd[1]: libpod-dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4.scope: Deactivated successfully.
Jan 20 19:12:13 compute-0 podman[146779]: 2026-01-20 19:12:13.288757049 +0000 UTC m=+0.901958356 container died dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:12:13 compute-0 systemd[1]: libpod-dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4.scope: Consumed 1.244s CPU time.
Jan 20 19:12:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-d810e4db007c82d9e1f9e8f277cbce729f95c9eaa20b86dd99239cce099287a9-merged.mount: Deactivated successfully.
Jan 20 19:12:13 compute-0 podman[146779]: 2026-01-20 19:12:13.363120434 +0000 UTC m=+0.976321731 container remove dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=lucid_chaplygin, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 19:12:13 compute-0 systemd[1]: libpod-conmon-dcf82a6ac3b572dea5dd0a1df272fafa65e7b0a1c0569e1a4fcb29a3c4d599f4.scope: Deactivated successfully.
Jan 20 19:12:13 compute-0 sudo[146522]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:12:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:12:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:12:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:12:13 compute-0 python3.9[147158]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:13 compute-0 sudo[147155]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:13 compute-0 sudo[147173]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:12:13 compute-0 sudo[147173]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:12:13 compute-0 sudo[147173]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:13 compute-0 sudo[147347]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivhcuxnzuxhywihhgewasjebddjkbyvc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936333.565216-29-153780697617336/AnsiballZ_file.py'
Jan 20 19:12:13 compute-0 sudo[147347]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:13 compute-0 python3.9[147349]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:13 compute-0 sudo[147347]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:12:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:12:14 compute-0 ceph-mon[75120]: pgmap v404: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.434622) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334434731, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 644, "num_deletes": 251, "total_data_size": 770913, "memory_usage": 783592, "flush_reason": "Manual Compaction"}
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334443641, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 764202, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9096, "largest_seqno": 9739, "table_properties": {"data_size": 760823, "index_size": 1287, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7522, "raw_average_key_size": 18, "raw_value_size": 753990, "raw_average_value_size": 1852, "num_data_blocks": 60, "num_entries": 407, "num_filter_entries": 407, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936283, "oldest_key_time": 1768936283, "file_creation_time": 1768936334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 9120 microseconds, and 2764 cpu microseconds.
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.443749) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 764202 bytes OK
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.443793) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.446521) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.446541) EVENT_LOG_v1 {"time_micros": 1768936334446535, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.446589) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 767474, prev total WAL file size 794290, number of live WAL files 2.
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.447497) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(746KB)], [23(7131KB)]
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334447572, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 8066841, "oldest_snapshot_seqno": -1}
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3325 keys, 6259179 bytes, temperature: kUnknown
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334504040, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6259179, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6235183, "index_size": 14607, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8325, "raw_key_size": 80568, "raw_average_key_size": 24, "raw_value_size": 6173283, "raw_average_value_size": 1856, "num_data_blocks": 636, "num_entries": 3325, "num_filter_entries": 3325, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936334, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.504300) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6259179 bytes
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.512345) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 142.7 rd, 110.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 7.0 +0.0 blob) out(6.0 +0.0 blob), read-write-amplify(18.7) write-amplify(8.2) OK, records in: 3839, records dropped: 514 output_compression: NoCompression
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.512405) EVENT_LOG_v1 {"time_micros": 1768936334512388, "job": 8, "event": "compaction_finished", "compaction_time_micros": 56546, "compaction_time_cpu_micros": 14144, "output_level": 6, "num_output_files": 1, "total_output_size": 6259179, "num_input_records": 3839, "num_output_records": 3325, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334512701, "job": 8, "event": "table_file_deletion", "file_number": 25}
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936334514002, "job": 8, "event": "table_file_deletion", "file_number": 23}
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.447313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.514108) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.514116) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.514118) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.514120) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:14 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:12:14.514124) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:12:14 compute-0 python3.9[147499]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:12:15 compute-0 sudo[147649]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nchpwvzfumjgjxtipxlgswesspwflsqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936334.8137143-73-277799328670130/AnsiballZ_seboolean.py'
Jan 20 19:12:15 compute-0 sudo[147649]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:15 compute-0 python3.9[147651]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 20 19:12:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:16 compute-0 sudo[147649]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:16 compute-0 python3.9[147801]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:17 compute-0 ceph-mon[75120]: pgmap v405: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:17 compute-0 python3.9[147922]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936336.2239652-81-280257201345664/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:18 compute-0 python3.9[148073]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:18 compute-0 ceph-mon[75120]: pgmap v406: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:18 compute-0 python3.9[148194]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936338.0199502-96-210961827006769/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:19 compute-0 sudo[148344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aevdmosaxkaurbkbcmdgrpesprtuvpwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936339.1926072-113-36116536145943/AnsiballZ_setup.py'
Jan 20 19:12:19 compute-0 sudo[148344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:19 compute-0 python3.9[148346]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:12:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:19 compute-0 ceph-mon[75120]: pgmap v407: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:19 compute-0 sudo[148344]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:20 compute-0 sudo[148428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypzsjeozzpvkiczkqkdtogmfavjrmbdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936339.1926072-113-36116536145943/AnsiballZ_dnf.py'
Jan 20 19:12:20 compute-0 sudo[148428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:20 compute-0 python3.9[148430]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:12:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:22 compute-0 ceph-mon[75120]: pgmap v408: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:22 compute-0 sudo[148428]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:23 compute-0 sudo[148581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qghavficgunvvbvcjgckhtfvbnwsqucw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936342.5611668-125-277307073662213/AnsiballZ_systemd.py'
Jan 20 19:12:23 compute-0 sudo[148581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:23 compute-0 python3.9[148583]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 19:12:23 compute-0 sudo[148581]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:24 compute-0 python3.9[148736]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:24 compute-0 python3.9[148857]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936343.5993252-133-156832045639515/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:24 compute-0 ceph-mon[75120]: pgmap v409: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:25 compute-0 python3.9[149007]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:25 compute-0 python3.9[149128]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936344.633798-133-129267100243576/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:26 compute-0 python3.9[149278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:26 compute-0 ceph-mon[75120]: pgmap v410: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:27 compute-0 python3.9[149399]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936346.2629704-177-203112403480213/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:27 compute-0 python3.9[149549]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:28 compute-0 python3.9[149670]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936347.3796203-177-95321056905441/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:28 compute-0 ceph-mon[75120]: pgmap v411: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:28 compute-0 python3.9[149820]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:12:29 compute-0 sudo[149972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isoozjvsraxxbhmljueufloerrstgzhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936349.1554158-215-106157203323323/AnsiballZ_file.py'
Jan 20 19:12:29 compute-0 sudo[149972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:29 compute-0 python3.9[149974]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:29 compute-0 sudo[149972]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:30 compute-0 sudo[150138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftgbakyxrouwyfyqrezebwbhvbhxyimh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936349.8172517-223-274729776339531/AnsiballZ_stat.py'
Jan 20 19:12:30 compute-0 ovn_controller[144787]: 2026-01-20T19:12:30Z|00025|memory|INFO|16128 kB peak resident set size after 30.4 seconds
Jan 20 19:12:30 compute-0 ovn_controller[144787]: 2026-01-20T19:12:30Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 20 19:12:30 compute-0 sudo[150138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:30 compute-0 podman[150098]: 2026-01-20 19:12:30.130970111 +0000 UTC m=+0.095599542 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 19:12:30 compute-0 python3.9[150146]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:30 compute-0 sudo[150138]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:30 compute-0 sudo[150228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdifdogyrzcocolgvyocjjkedncjdjar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936349.8172517-223-274729776339531/AnsiballZ_file.py'
Jan 20 19:12:30 compute-0 sudo[150228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:30 compute-0 python3.9[150230]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:30 compute-0 sudo[150228]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:30 compute-0 ceph-mon[75120]: pgmap v412: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:31 compute-0 sudo[150380]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lckdnivcfdqqnbawqkezugvbarltpmon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936350.80403-223-111879692989550/AnsiballZ_stat.py'
Jan 20 19:12:31 compute-0 sudo[150380]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:31 compute-0 python3.9[150382]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:31 compute-0 sudo[150380]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:31 compute-0 sudo[150458]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntnjgwjhmrauobjnianoudiuoavgttcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936350.80403-223-111879692989550/AnsiballZ_file.py'
Jan 20 19:12:31 compute-0 sudo[150458]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:12:31
Jan 20 19:12:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:12:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:12:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.log', 'backups', 'vms', 'volumes', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', '.mgr', 'default.rgw.meta']
Jan 20 19:12:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:12:31 compute-0 python3.9[150460]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:31 compute-0 sudo[150458]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:32 compute-0 sudo[150610]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cynlxkqjcutptkdpuexxzsctzwvktwoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936351.7878807-246-225855636730218/AnsiballZ_file.py'
Jan 20 19:12:32 compute-0 sudo[150610]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:32 compute-0 python3.9[150612]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:12:32 compute-0 sudo[150610]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:32 compute-0 sudo[150762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zgeonarfupqclwgrsszgazxsrpygvwjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936352.372178-254-47557510345050/AnsiballZ_stat.py'
Jan 20 19:12:32 compute-0 sudo[150762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:32 compute-0 python3.9[150764]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:32 compute-0 sudo[150762]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:32 compute-0 ceph-mon[75120]: pgmap v413: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:33 compute-0 sudo[150840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvsenvicmbhuhqesgfzfaxwmrpazxmyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936352.372178-254-47557510345050/AnsiballZ_file.py'
Jan 20 19:12:33 compute-0 sudo[150840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:33 compute-0 python3.9[150842]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:12:33 compute-0 sudo[150840]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:33 compute-0 sudo[150992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tovglesgeapmklrxhjyhejvvkhccennt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936353.3903239-266-262286029731932/AnsiballZ_stat.py'
Jan 20 19:12:33 compute-0 sudo[150992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:33 compute-0 python3.9[150994]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:33 compute-0 sudo[150992]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:33 compute-0 ceph-mon[75120]: pgmap v414: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:34 compute-0 sudo[151070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuvdsvhyimdfoejfodauxzuftfiqxuvd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936353.3903239-266-262286029731932/AnsiballZ_file.py'
Jan 20 19:12:34 compute-0 sudo[151070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:34 compute-0 python3.9[151072]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:12:34 compute-0 sudo[151070]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:12:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:12:34 compute-0 sudo[151222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gncglivqidjwvmvzeunizgxfhknjzvvv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936354.4448276-278-168346533431461/AnsiballZ_systemd.py'
Jan 20 19:12:34 compute-0 sudo[151222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:34 compute-0 python3.9[151224]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:12:35 compute-0 systemd[1]: Reloading.
Jan 20 19:12:35 compute-0 systemd-rc-local-generator[151253]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:12:35 compute-0 systemd-sysv-generator[151256]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:12:35 compute-0 sudo[151222]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:35 compute-0 sudo[151412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zafepcxnsguvdczxfkdhudgdsqcooazy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936355.5103073-286-192131265435210/AnsiballZ_stat.py'
Jan 20 19:12:35 compute-0 sudo[151412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:36 compute-0 python3.9[151414]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:36 compute-0 sudo[151412]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:36 compute-0 sudo[151490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxkuiazsictbwibewafqkyxacavzzwkn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936355.5103073-286-192131265435210/AnsiballZ_file.py'
Jan 20 19:12:36 compute-0 sudo[151490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:36 compute-0 python3.9[151492]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:12:36 compute-0 sudo[151490]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:36 compute-0 ceph-mon[75120]: pgmap v415: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:36 compute-0 sudo[151642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfufwnsmcjabuikweoegwcydpeistdsl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936356.626005-298-254338892175150/AnsiballZ_stat.py'
Jan 20 19:12:36 compute-0 sudo[151642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:37 compute-0 python3.9[151644]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:37 compute-0 sudo[151642]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:37 compute-0 sudo[151720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quiumzxwopsweqewbydbaxpkzypuiamt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936356.626005-298-254338892175150/AnsiballZ_file.py'
Jan 20 19:12:37 compute-0 sudo[151720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:37 compute-0 python3.9[151722]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:12:37 compute-0 sudo[151720]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:37 compute-0 ceph-mon[75120]: pgmap v416: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:38 compute-0 sudo[151872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aelmryesnvjocycrfzescfefqjxddjvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936357.7967951-310-20306196587342/AnsiballZ_systemd.py'
Jan 20 19:12:38 compute-0 sudo[151872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:38 compute-0 python3.9[151874]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:12:38 compute-0 systemd[1]: Reloading.
Jan 20 19:12:38 compute-0 systemd-rc-local-generator[151901]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:12:38 compute-0 systemd-sysv-generator[151905]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:12:38 compute-0 systemd[1]: Starting Create netns directory...
Jan 20 19:12:38 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 20 19:12:38 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 20 19:12:38 compute-0 systemd[1]: Finished Create netns directory.
Jan 20 19:12:38 compute-0 sudo[151872]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:39 compute-0 sudo[152064]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trupkeiyyifgqfwyqgzvbkbrmlmlqhdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936358.941716-320-179996580548722/AnsiballZ_file.py'
Jan 20 19:12:39 compute-0 sudo[152064]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:39 compute-0 python3.9[152066]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:39 compute-0 sudo[152064]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:39 compute-0 sudo[152216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilfeyewifrahoyriagjmmofhwsnhxnko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936359.560953-328-127661444211043/AnsiballZ_stat.py'
Jan 20 19:12:39 compute-0 sudo[152216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:40 compute-0 python3.9[152218]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:40 compute-0 sudo[152216]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:40 compute-0 sudo[152339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkaiglkbsnastyucqfueppsgncpymiqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936359.560953-328-127661444211043/AnsiballZ_copy.py'
Jan 20 19:12:40 compute-0 sudo[152339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:40 compute-0 python3.9[152341]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936359.560953-328-127661444211043/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:40 compute-0 sudo[152339]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:40 compute-0 ceph-mon[75120]: pgmap v417: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:41 compute-0 sudo[152491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxbxokbwnkvauchsfxzqrwphrhhgginp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936360.9802272-345-208506012365335/AnsiballZ_file.py'
Jan 20 19:12:41 compute-0 sudo[152491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:41 compute-0 python3.9[152493]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:12:41 compute-0 sudo[152491]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:41 compute-0 sudo[152643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xlbknnsjjzdvmrynslfxzkrijdhhcypn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936361.667344-353-75450882168547/AnsiballZ_file.py'
Jan 20 19:12:41 compute-0 sudo[152643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:42 compute-0 python3.9[152645]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:12:42 compute-0 sudo[152643]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:42 compute-0 sudo[152795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lflbvkdmrzabpinphdsomnfmepscuawa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936362.2793038-361-252779897983029/AnsiballZ_stat.py'
Jan 20 19:12:42 compute-0 sudo[152795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:42 compute-0 python3.9[152797]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:12:42 compute-0 sudo[152795]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:42 compute-0 ceph-mon[75120]: pgmap v418: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:43 compute-0 sudo[152918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmznqyxtovkfseaecplzbyufjrhtenuu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936362.2793038-361-252779897983029/AnsiballZ_copy.py'
Jan 20 19:12:43 compute-0 sudo[152918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:43 compute-0 python3.9[152920]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936362.2793038-361-252779897983029/.source.json _original_basename=.xkh550wa follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:12:43 compute-0 sudo[152918]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:43 compute-0 python3.9[153070]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:12:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:12:44 compute-0 ceph-mon[75120]: pgmap v419: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:45 compute-0 sudo[153491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qlaovvcmuxcwcnafgdikgffpqnlppcgo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936365.3360732-401-92386789448026/AnsiballZ_container_config_data.py'
Jan 20 19:12:45 compute-0 sudo[153491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:45 compute-0 python3.9[153493]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 20 19:12:45 compute-0 sudo[153491]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:46 compute-0 sudo[153643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ihmzbarxtlptdmnpvqhsqwkqdestxcve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936366.2259617-412-232768151144982/AnsiballZ_container_config_hash.py'
Jan 20 19:12:46 compute-0 sudo[153643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:46 compute-0 python3.9[153645]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 19:12:46 compute-0 ceph-mon[75120]: pgmap v420: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:46 compute-0 sudo[153643]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:47 compute-0 sudo[153795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weyseidgjctwwcqftmomlbxbfjdspizj ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768936367.1280365-422-70249922991155/AnsiballZ_edpm_container_manage.py'
Jan 20 19:12:47 compute-0 sudo[153795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:47 compute-0 python3[153797]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 19:12:47 compute-0 ceph-mon[75120]: pgmap v421: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:50 compute-0 ceph-mon[75120]: pgmap v422: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:52 compute-0 ceph-mon[75120]: pgmap v423: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:54 compute-0 ceph-mon[75120]: pgmap v424: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:56 compute-0 ceph-mon[75120]: pgmap v425: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:57 compute-0 podman[153811]: 2026-01-20 19:12:57.097305835 +0000 UTC m=+9.154866086 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 19:12:57 compute-0 podman[153952]: 2026-01-20 19:12:57.227921592 +0000 UTC m=+0.048202061 container create 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Jan 20 19:12:57 compute-0 podman[153952]: 2026-01-20 19:12:57.199940024 +0000 UTC m=+0.020220523 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 19:12:57 compute-0 python3[153797]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 20 19:12:57 compute-0 sudo[153795]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:57 compute-0 sudo[154138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lujinkqicetwngaahuyjsqxsadkefotb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936377.475051-430-31975898749529/AnsiballZ_stat.py'
Jan 20 19:12:57 compute-0 sudo[154138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:57 compute-0 python3.9[154140]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:12:57 compute-0 sudo[154138]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:58 compute-0 sudo[154292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezyzrovnedbrekigwmgywkvvbpdilzoq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936378.123861-439-143732791430177/AnsiballZ_file.py'
Jan 20 19:12:58 compute-0 sudo[154292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:58 compute-0 python3.9[154294]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:12:58 compute-0 sudo[154292]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:12:58 compute-0 sudo[154368]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tthaphfhdnodhrgatvslgcblhcicksit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936378.123861-439-143732791430177/AnsiballZ_stat.py'
Jan 20 19:12:58 compute-0 sudo[154368]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:58 compute-0 ceph-mon[75120]: pgmap v426: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:58 compute-0 python3.9[154370]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:12:58 compute-0 sudo[154368]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:59 compute-0 sudo[154519]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjpbzxollcvzxvhbtdivjnxjfyvjgiqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936379.047677-439-181628055902260/AnsiballZ_copy.py'
Jan 20 19:12:59 compute-0 sudo[154519]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:12:59 compute-0 python3.9[154521]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768936379.047677-439-181628055902260/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:12:59 compute-0 sudo[154519]: pam_unix(sudo:session): session closed for user root
Jan 20 19:12:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:12:59 compute-0 sudo[154595]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqhkjuoqtsyxzagdfdmskkgctgqwybyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936379.047677-439-181628055902260/AnsiballZ_systemd.py'
Jan 20 19:12:59 compute-0 sudo[154595]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:00 compute-0 python3.9[154597]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 19:13:00 compute-0 systemd[1]: Reloading.
Jan 20 19:13:00 compute-0 systemd-rc-local-generator[154639]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:13:00 compute-0 systemd-sysv-generator[154643]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:13:00 compute-0 podman[154599]: 2026-01-20 19:13:00.434907808 +0000 UTC m=+0.127244777 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 19:13:00 compute-0 sudo[154595]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:00 compute-0 sudo[154731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igibajlgenzpnwskddbddtvunxpzepqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936379.047677-439-181628055902260/AnsiballZ_systemd.py'
Jan 20 19:13:00 compute-0 sudo[154731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:00 compute-0 ceph-mon[75120]: pgmap v427: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:01 compute-0 python3.9[154733]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:13:01 compute-0 systemd[1]: Reloading.
Jan 20 19:13:01 compute-0 systemd-rc-local-generator[154761]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:13:01 compute-0 systemd-sysv-generator[154764]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:13:01 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 20 19:13:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b505a92993e7866c8202466dc589dba7160bca5ca9a37362c648ac2f6a55d590/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b505a92993e7866c8202466dc589dba7160bca5ca9a37362c648ac2f6a55d590/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:03 compute-0 ceph-mon[75120]: pgmap v428: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:03 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef.
Jan 20 19:13:03 compute-0 podman[154774]: 2026-01-20 19:13:03.24114014 +0000 UTC m=+1.516831560 container init 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: + sudo -E kolla_set_configs
Jan 20 19:13:03 compute-0 podman[154774]: 2026-01-20 19:13:03.269117088 +0000 UTC m=+1.544808488 container start 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 19:13:03 compute-0 edpm-start-podman-container[154774]: ovn_metadata_agent
Jan 20 19:13:03 compute-0 edpm-start-podman-container[154773]: Creating additional drop-in dependency for "ovn_metadata_agent" (155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef)
Jan 20 19:13:03 compute-0 podman[154797]: 2026-01-20 19:13:03.338153765 +0000 UTC m=+0.055473604 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 20 19:13:03 compute-0 systemd[1]: Reloading.
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Validating config file
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Copying service configuration files
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Writing out command to execute
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: ++ cat /run_command
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: + CMD=neutron-ovn-metadata-agent
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: + ARGS=
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: + sudo kolla_copy_cacerts
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: + [[ ! -n '' ]]
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: + . kolla_extend_start
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: Running command: 'neutron-ovn-metadata-agent'
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: + umask 0022
Jan 20 19:13:03 compute-0 ovn_metadata_agent[154791]: + exec neutron-ovn-metadata-agent
Jan 20 19:13:03 compute-0 systemd-rc-local-generator[154870]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:13:03 compute-0 systemd-sysv-generator[154875]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:13:03 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 20 19:13:03 compute-0 sudo[154731]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:04 compute-0 ceph-mon[75120]: pgmap v429: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:04 compute-0 python3.9[155031]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 20 19:13:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:05 compute-0 sudo[155181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyfbjtseiybsqcjmgtywumopvwzisdlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936384.7960057-484-143695302364000/AnsiballZ_stat.py'
Jan 20 19:13:05 compute-0 sudo[155181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:05 compute-0 python3.9[155183]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:13:05 compute-0 sudo[155181]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.399 154796 INFO neutron.common.config [-] Logging enabled!
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.400 154796 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.400 154796 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.401 154796 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.402 154796 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.403 154796 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.404 154796 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.404 154796 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.404 154796 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.404 154796 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.405 154796 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.406 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.407 154796 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.408 154796 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.409 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.410 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.411 154796 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.412 154796 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.413 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.414 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.415 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.416 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.417 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.418 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.419 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.420 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.421 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.422 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.423 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.424 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.425 154796 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.426 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.427 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.428 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.429 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.430 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.431 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.432 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.433 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.434 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.435 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.436 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.436 154796 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.436 154796 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.446 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.446 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.446 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.446 154796 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.446 154796 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.459 154796 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 15f2b046-37e6-488b-9e52-3d187c798598 (UUID: 15f2b046-37e6-488b-9e52-3d187c798598) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.480 154796 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.480 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.480 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.480 154796 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.483 154796 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.488 154796 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.494 154796 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '15f2b046-37e6-488b-9e52-3d187c798598'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fb5fc7f0b80>], external_ids={}, name=15f2b046-37e6-488b-9e52-3d187c798598, nb_cfg_timestamp=1768936327737, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.495 154796 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fb5fc772c10>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.496 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.496 154796 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.496 154796 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.497 154796 INFO oslo_service.service [-] Starting 1 workers
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.501 154796 DEBUG oslo_service.service [-] Started child 155254 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.504 154796 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpqelov3wn/privsep.sock']
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.504 155254 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-957161'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.527 155254 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.528 155254 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.528 155254 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.532 155254 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.539 155254 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 20 19:13:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:05.544 155254 INFO eventlet.wsgi.server [-] (155254) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 20 19:13:05 compute-0 sudo[155310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkngcnlqlbnshkozkwiyizwluatyjjmv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936384.7960057-484-143695302364000/AnsiballZ_copy.py'
Jan 20 19:13:05 compute-0 sudo[155310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:05 compute-0 python3.9[155312]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936384.7960057-484-143695302364000/.source.yaml _original_basename=.8d6izojx follow=False checksum=47c886dea5e425583a8c1699aae0fd4573459ba9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:05 compute-0 sudo[155310]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:06 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 20 19:13:06 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.197 154796 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 20 19:13:06 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.199 154796 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqelov3wn/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 20 19:13:06 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.047 155338 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 20 19:13:06 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.052 155338 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 20 19:13:06 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.054 155338 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 20 19:13:06 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.054 155338 INFO oslo.privsep.daemon [-] privsep daemon running as pid 155338
Jan 20 19:13:06 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.202 155338 DEBUG oslo.privsep.daemon [-] privsep: reply[c015d247-90c3-4336-8e16-f1e738c84e03]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:06 compute-0 sshd-session[146013]: Connection closed by 192.168.122.30 port 37392
Jan 20 19:13:06 compute-0 sshd-session[145991]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:13:06 compute-0 systemd[1]: session-48.scope: Deactivated successfully.
Jan 20 19:13:06 compute-0 systemd[1]: session-48.scope: Consumed 54.342s CPU time.
Jan 20 19:13:06 compute-0 systemd-logind[797]: Session 48 logged out. Waiting for processes to exit.
Jan 20 19:13:06 compute-0 systemd-logind[797]: Removed session 48.
Jan 20 19:13:06 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.770 155338 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:13:06 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.771 155338 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:13:06 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:06.771 155338 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:13:06 compute-0 ceph-mon[75120]: pgmap v430: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.370 155338 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5b48f1-9436-49bf-9323-90e90dae6ab4]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.373 154796 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=15f2b046-37e6-488b-9e52-3d187c798598, column=external_ids, values=({'neutron:ovn-metadata-id': '3059e1c8-eb87-5eb4-929e-9633646f5b0f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.380 154796 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=15f2b046-37e6-488b-9e52-3d187c798598, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.386 154796 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.387 154796 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.388 154796 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.389 154796 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.390 154796 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.391 154796 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.392 154796 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.393 154796 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.394 154796 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.395 154796 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.396 154796 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.397 154796 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.398 154796 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.399 154796 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.400 154796 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.401 154796 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.402 154796 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.403 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.404 154796 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.405 154796 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.406 154796 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.407 154796 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.408 154796 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.409 154796 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.410 154796 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.411 154796 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.412 154796 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.413 154796 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.414 154796 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.415 154796 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.416 154796 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.417 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.418 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.419 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.420 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:13:07 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:13:07.421 154796 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 19:13:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:09 compute-0 ceph-mon[75120]: pgmap v431: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:10 compute-0 ceph-mon[75120]: pgmap v432: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:11 compute-0 sshd-session[155343]: Accepted publickey for zuul from 192.168.122.30 port 59336 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:13:11 compute-0 systemd-logind[797]: New session 49 of user zuul.
Jan 20 19:13:11 compute-0 systemd[1]: Started Session 49 of User zuul.
Jan 20 19:13:11 compute-0 sshd-session[155343]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:13:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:12 compute-0 python3.9[155496]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:13:12 compute-0 ceph-mon[75120]: pgmap v433: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:13 compute-0 sudo[155623]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:13:13 compute-0 sudo[155623]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:13 compute-0 sudo[155623]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:13 compute-0 sudo[155676]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-faxzfsioalaxlqnqftmxgukulnkvcwxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936393.1589959-29-151612283692750/AnsiballZ_command.py'
Jan 20 19:13:13 compute-0 sudo[155676]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:13 compute-0 sudo[155675]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 20 19:13:13 compute-0 sudo[155675]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:13 compute-0 python3.9[155695]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:13:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:13 compute-0 sudo[155676]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:13 compute-0 sudo[155675]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:13:13 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:13:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:13:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:13:14 compute-0 sudo[155759]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:13:14 compute-0 sudo[155759]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:14 compute-0 sudo[155759]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:14 compute-0 sudo[155784]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:13:14 compute-0 sudo[155784]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:14 compute-0 sudo[155784]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:13:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:13:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:13:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:13:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:13:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:13:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:13:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:13:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:13:14 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:13:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:13:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:13:14 compute-0 sudo[155976]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-syjefjgdrupjtawqrfpomkjnyjoeijgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936394.1171868-40-58847467121664/AnsiballZ_systemd_service.py'
Jan 20 19:13:14 compute-0 sudo[155976]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:14 compute-0 sudo[155947]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:13:14 compute-0 sudo[155947]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:14 compute-0 sudo[155947]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:14 compute-0 sudo[155991]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:13:14 compute-0 sudo[155991]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:14 compute-0 ceph-mon[75120]: pgmap v434: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:13:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:13:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:13:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:13:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:13:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:13:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:13:14 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:13:15 compute-0 python3.9[155988]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 19:13:15 compute-0 systemd[1]: Reloading.
Jan 20 19:13:15 compute-0 podman[156028]: 2026-01-20 19:13:15.065432146 +0000 UTC m=+0.042395440 container create 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:13:15 compute-0 systemd-sysv-generator[156074]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:13:15 compute-0 systemd-rc-local-generator[156071]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:13:15 compute-0 podman[156028]: 2026-01-20 19:13:15.045104913 +0000 UTC m=+0.022068207 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:13:15 compute-0 systemd[1]: Started libpod-conmon-7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95.scope.
Jan 20 19:13:15 compute-0 sudo[155976]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:13:15 compute-0 podman[156028]: 2026-01-20 19:13:15.371541083 +0000 UTC m=+0.348504377 container init 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:13:15 compute-0 podman[156028]: 2026-01-20 19:13:15.378893537 +0000 UTC m=+0.355856811 container start 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:13:15 compute-0 podman[156028]: 2026-01-20 19:13:15.382981405 +0000 UTC m=+0.359944679 container attach 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:13:15 compute-0 suspicious_swartz[156079]: 167 167
Jan 20 19:13:15 compute-0 systemd[1]: libpod-7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95.scope: Deactivated successfully.
Jan 20 19:13:15 compute-0 podman[156028]: 2026-01-20 19:13:15.38697919 +0000 UTC m=+0.363942464 container died 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:13:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-1682b33082572893d537790bfd409fcd448243b9d85d419954a712d9b9eb5455-merged.mount: Deactivated successfully.
Jan 20 19:13:15 compute-0 podman[156028]: 2026-01-20 19:13:15.429699576 +0000 UTC m=+0.406662840 container remove 7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=suspicious_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:13:15 compute-0 systemd[1]: libpod-conmon-7a272586fd44a2a20e811e8cbb54fd0bacdbfec57dd16c054677de30c69b0d95.scope: Deactivated successfully.
Jan 20 19:13:15 compute-0 podman[156177]: 2026-01-20 19:13:15.594930319 +0000 UTC m=+0.041943309 container create 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:13:15 compute-0 systemd[1]: Started libpod-conmon-3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234.scope.
Jan 20 19:13:15 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:15 compute-0 podman[156177]: 2026-01-20 19:13:15.665179331 +0000 UTC m=+0.112192371 container init 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 19:13:15 compute-0 podman[156177]: 2026-01-20 19:13:15.57691355 +0000 UTC m=+0.023926560 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:13:15 compute-0 podman[156177]: 2026-01-20 19:13:15.672441275 +0000 UTC m=+0.119454285 container start 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 19:13:15 compute-0 podman[156177]: 2026-01-20 19:13:15.676103552 +0000 UTC m=+0.123116572 container attach 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3)
Jan 20 19:13:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:16 compute-0 python3.9[156274]: ansible-ansible.builtin.service_facts Invoked
Jan 20 19:13:16 compute-0 quirky_brown[156194]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:13:16 compute-0 quirky_brown[156194]: --> All data devices are unavailable
Jan 20 19:13:16 compute-0 network[156304]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 19:13:16 compute-0 network[156305]: 'network-scripts' will be removed from distribution in near future.
Jan 20 19:13:16 compute-0 network[156306]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 19:13:16 compute-0 systemd[1]: libpod-3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234.scope: Deactivated successfully.
Jan 20 19:13:16 compute-0 podman[156177]: 2026-01-20 19:13:16.18518745 +0000 UTC m=+0.632200490 container died 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:13:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-23ec80f33bd2ccb2bf8b53fa9663ea34f090acdf8a88d31ac8840364c1493e99-merged.mount: Deactivated successfully.
Jan 20 19:13:16 compute-0 podman[156177]: 2026-01-20 19:13:16.862546633 +0000 UTC m=+1.309559623 container remove 3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_brown, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:13:16 compute-0 systemd[1]: libpod-conmon-3662d38392392a42b1c2a002b7a2623c4d4e3aa8d72626f5d6b1a9635f515234.scope: Deactivated successfully.
Jan 20 19:13:16 compute-0 ceph-mon[75120]: pgmap v435: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:16 compute-0 sudo[155991]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:16 compute-0 sudo[156330]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:13:16 compute-0 sudo[156330]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:16 compute-0 sudo[156330]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:17 compute-0 sudo[156358]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:13:17 compute-0 sudo[156358]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:17 compute-0 podman[156411]: 2026-01-20 19:13:17.323129287 +0000 UTC m=+0.042667318 container create df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:13:17 compute-0 systemd[1]: Started libpod-conmon-df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7.scope.
Jan 20 19:13:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:13:17 compute-0 podman[156411]: 2026-01-20 19:13:17.399155876 +0000 UTC m=+0.118693947 container init df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:13:17 compute-0 podman[156411]: 2026-01-20 19:13:17.304322628 +0000 UTC m=+0.023860689 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:13:17 compute-0 podman[156411]: 2026-01-20 19:13:17.40690089 +0000 UTC m=+0.126438921 container start df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:13:17 compute-0 podman[156411]: 2026-01-20 19:13:17.410540247 +0000 UTC m=+0.130078308 container attach df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 19:13:17 compute-0 crazy_davinci[156431]: 167 167
Jan 20 19:13:17 compute-0 podman[156411]: 2026-01-20 19:13:17.412048103 +0000 UTC m=+0.131586134 container died df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:13:17 compute-0 systemd[1]: libpod-df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7.scope: Deactivated successfully.
Jan 20 19:13:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e31c9dac90acffbfb3e6a7b89c51be10935fca18ba38f0e2dfbbf185322ba6ea-merged.mount: Deactivated successfully.
Jan 20 19:13:17 compute-0 podman[156411]: 2026-01-20 19:13:17.450691153 +0000 UTC m=+0.170229184 container remove df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=crazy_davinci, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:13:17 compute-0 systemd[1]: libpod-conmon-df70ee0c772f1cb9e4ba097f9df7dfc147265bc3d3c79c38633bb163265e24c7.scope: Deactivated successfully.
Jan 20 19:13:17 compute-0 podman[156467]: 2026-01-20 19:13:17.607400433 +0000 UTC m=+0.044369367 container create 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:13:17 compute-0 systemd[1]: Started libpod-conmon-76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8.scope.
Jan 20 19:13:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ca6561e2d112906699e10f53a95b61198a16c7f3f3b2f7cbc472eaea3bd987/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ca6561e2d112906699e10f53a95b61198a16c7f3f3b2f7cbc472eaea3bd987/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ca6561e2d112906699e10f53a95b61198a16c7f3f3b2f7cbc472eaea3bd987/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5ca6561e2d112906699e10f53a95b61198a16c7f3f3b2f7cbc472eaea3bd987/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:17 compute-0 podman[156467]: 2026-01-20 19:13:17.668967229 +0000 UTC m=+0.105936183 container init 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:13:17 compute-0 podman[156467]: 2026-01-20 19:13:17.676523038 +0000 UTC m=+0.113491972 container start 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:13:17 compute-0 podman[156467]: 2026-01-20 19:13:17.680020341 +0000 UTC m=+0.116989425 container attach 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:13:17 compute-0 podman[156467]: 2026-01-20 19:13:17.586812023 +0000 UTC m=+0.023780977 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:13:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:17 compute-0 zealous_brattain[156488]: {
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:     "0": [
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:         {
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "devices": [
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "/dev/loop3"
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             ],
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_name": "ceph_lv0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_size": "21470642176",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "name": "ceph_lv0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "tags": {
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.cluster_name": "ceph",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.crush_device_class": "",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.encrypted": "0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.objectstore": "bluestore",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.osd_id": "0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.type": "block",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.vdo": "0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.with_tpm": "0"
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             },
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "type": "block",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "vg_name": "ceph_vg0"
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:         }
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:     ],
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:     "1": [
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:         {
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "devices": [
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "/dev/loop4"
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             ],
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_name": "ceph_lv1",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_size": "21470642176",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "name": "ceph_lv1",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "tags": {
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.cluster_name": "ceph",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.crush_device_class": "",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.encrypted": "0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.objectstore": "bluestore",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.osd_id": "1",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.type": "block",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.vdo": "0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.with_tpm": "0"
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             },
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "type": "block",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "vg_name": "ceph_vg1"
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:         }
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:     ],
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:     "2": [
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:         {
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "devices": [
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "/dev/loop5"
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             ],
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_name": "ceph_lv2",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_size": "21470642176",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "name": "ceph_lv2",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "tags": {
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.cluster_name": "ceph",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.crush_device_class": "",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.encrypted": "0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.objectstore": "bluestore",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.osd_id": "2",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.type": "block",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.vdo": "0",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:                 "ceph.with_tpm": "0"
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             },
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "type": "block",
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:             "vg_name": "ceph_vg2"
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:         }
Jan 20 19:13:17 compute-0 zealous_brattain[156488]:     ]
Jan 20 19:13:17 compute-0 zealous_brattain[156488]: }
Jan 20 19:13:17 compute-0 systemd[1]: libpod-76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8.scope: Deactivated successfully.
Jan 20 19:13:18 compute-0 podman[156500]: 2026-01-20 19:13:18.009429083 +0000 UTC m=+0.030263512 container died 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 19:13:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-e5ca6561e2d112906699e10f53a95b61198a16c7f3f3b2f7cbc472eaea3bd987-merged.mount: Deactivated successfully.
Jan 20 19:13:18 compute-0 podman[156500]: 2026-01-20 19:13:18.486480297 +0000 UTC m=+0.507314666 container remove 76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_brattain, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:13:18 compute-0 systemd[1]: libpod-conmon-76b896cb504b90916e07d7b7a85cc184c91aac22328178237a832af81682f2c8.scope: Deactivated successfully.
Jan 20 19:13:18 compute-0 sudo[156358]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:18 compute-0 sudo[156515]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:13:18 compute-0 sudo[156515]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:18 compute-0 sudo[156515]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:18 compute-0 sudo[156540]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:13:18 compute-0 sudo[156540]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:18 compute-0 ceph-mon[75120]: pgmap v436: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:18 compute-0 podman[156589]: 2026-01-20 19:13:18.9822693 +0000 UTC m=+0.043206770 container create a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:13:19 compute-0 systemd[1]: Started libpod-conmon-a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1.scope.
Jan 20 19:13:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:13:19 compute-0 podman[156589]: 2026-01-20 19:13:18.964491136 +0000 UTC m=+0.025428606 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:13:19 compute-0 podman[156589]: 2026-01-20 19:13:19.085758322 +0000 UTC m=+0.146695802 container init a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:13:19 compute-0 podman[156589]: 2026-01-20 19:13:19.09322127 +0000 UTC m=+0.154158720 container start a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:13:19 compute-0 jovial_bassi[156608]: 167 167
Jan 20 19:13:19 compute-0 systemd[1]: libpod-a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1.scope: Deactivated successfully.
Jan 20 19:13:19 compute-0 podman[156589]: 2026-01-20 19:13:19.123584672 +0000 UTC m=+0.184522122 container attach a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:13:19 compute-0 podman[156589]: 2026-01-20 19:13:19.124133166 +0000 UTC m=+0.185070616 container died a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:13:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9335d83330c4132ba8cf95f760e8a3ab7287836f7fb34e7e27337d3870d52ae-merged.mount: Deactivated successfully.
Jan 20 19:13:19 compute-0 podman[156589]: 2026-01-20 19:13:19.159573459 +0000 UTC m=+0.220510909 container remove a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=jovial_bassi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:13:19 compute-0 systemd[1]: libpod-conmon-a4afefbdcd880510d89603450e27868872d9ad8a76ab6f129bb59e8008885ed1.scope: Deactivated successfully.
Jan 20 19:13:19 compute-0 podman[156647]: 2026-01-20 19:13:19.33982901 +0000 UTC m=+0.047253226 container create 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 19:13:19 compute-0 systemd[1]: Started libpod-conmon-0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4.scope.
Jan 20 19:13:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d26b4da06f548278671de0adf498272de8dbb3195f4ffd30a9e37681e090da/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d26b4da06f548278671de0adf498272de8dbb3195f4ffd30a9e37681e090da/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d26b4da06f548278671de0adf498272de8dbb3195f4ffd30a9e37681e090da/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:19 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32d26b4da06f548278671de0adf498272de8dbb3195f4ffd30a9e37681e090da/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:13:19 compute-0 podman[156647]: 2026-01-20 19:13:19.319453485 +0000 UTC m=+0.026877731 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:13:19 compute-0 podman[156647]: 2026-01-20 19:13:19.422944928 +0000 UTC m=+0.130369154 container init 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 19:13:19 compute-0 podman[156647]: 2026-01-20 19:13:19.431208155 +0000 UTC m=+0.138632371 container start 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:13:19 compute-0 podman[156647]: 2026-01-20 19:13:19.435069737 +0000 UTC m=+0.142493953 container attach 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:13:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:20 compute-0 lvm[156810]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:13:20 compute-0 lvm[156810]: VG ceph_vg1 finished
Jan 20 19:13:20 compute-0 lvm[156802]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:13:20 compute-0 lvm[156802]: VG ceph_vg0 finished
Jan 20 19:13:20 compute-0 lvm[156822]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:13:20 compute-0 lvm[156822]: VG ceph_vg2 finished
Jan 20 19:13:20 compute-0 quirky_payne[156668]: {}
Jan 20 19:13:20 compute-0 systemd[1]: libpod-0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4.scope: Deactivated successfully.
Jan 20 19:13:20 compute-0 systemd[1]: libpod-0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4.scope: Consumed 1.324s CPU time.
Jan 20 19:13:20 compute-0 podman[156647]: 2026-01-20 19:13:20.267907331 +0000 UTC m=+0.975331567 container died 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:13:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-32d26b4da06f548278671de0adf498272de8dbb3195f4ffd30a9e37681e090da-merged.mount: Deactivated successfully.
Jan 20 19:13:20 compute-0 podman[156647]: 2026-01-20 19:13:20.310827163 +0000 UTC m=+1.018251379 container remove 0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=quirky_payne, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:13:20 compute-0 systemd[1]: libpod-conmon-0ec06379ab46b5049ad3a58b61f32d70d05481d35998070b3eb27533bc43bbd4.scope: Deactivated successfully.
Jan 20 19:13:20 compute-0 sudo[156540]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:20 compute-0 sudo[156940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shtujzmarrdshqzuddecbaoqfnhxoisq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936400.1127121-59-88278653895411/AnsiballZ_systemd_service.py'
Jan 20 19:13:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:13:20 compute-0 sudo[156940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:20 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:13:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:13:20 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:13:20 compute-0 sudo[156943]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:13:20 compute-0 sudo[156943]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:13:20 compute-0 sudo[156943]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:20 compute-0 python3.9[156942]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:13:20 compute-0 sudo[156940]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:20 compute-0 ceph-mon[75120]: pgmap v437: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:20 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:13:20 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:13:21 compute-0 sudo[157118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fczehucqletoanntyjlmilcyauvibtnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936400.830485-59-119903713795015/AnsiballZ_systemd_service.py'
Jan 20 19:13:21 compute-0 sudo[157118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:21 compute-0 python3.9[157120]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:13:21 compute-0 sudo[157118]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:21 compute-0 sudo[157271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxfclocwalmyvzhbrerezeemokozjgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936401.587217-59-147989301616422/AnsiballZ_systemd_service.py'
Jan 20 19:13:21 compute-0 sudo[157271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:22 compute-0 python3.9[157273]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:13:22 compute-0 sudo[157271]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:22 compute-0 ceph-mon[75120]: pgmap v438: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:23 compute-0 sudo[157424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lytdpzuplakjldnjrxbeabntatvwluxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936402.285775-59-54640690747866/AnsiballZ_systemd_service.py'
Jan 20 19:13:23 compute-0 sudo[157424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:23 compute-0 python3.9[157426]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:13:23 compute-0 sudo[157424]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:23 compute-0 sudo[157577]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hseurlwlgfcugucwwdxzsmrjajhrmtft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936403.4787912-59-186302979505359/AnsiballZ_systemd_service.py'
Jan 20 19:13:23 compute-0 sudo[157577]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:23 compute-0 ceph-mon[75120]: pgmap v439: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:24 compute-0 python3.9[157579]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:13:24 compute-0 sudo[157577]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:24 compute-0 sudo[157730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcjjvsjtrvdnvpcbyyptxaafkstawtal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936404.1660337-59-232770494303993/AnsiballZ_systemd_service.py'
Jan 20 19:13:24 compute-0 sudo[157730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:24 compute-0 python3.9[157732]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:13:24 compute-0 sudo[157730]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:25 compute-0 sudo[157883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhnbllexobhdlccnyaciwugfquzjntzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936404.8303711-59-157412629347294/AnsiballZ_systemd_service.py'
Jan 20 19:13:25 compute-0 sudo[157883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:25 compute-0 python3.9[157885]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:13:25 compute-0 sudo[157883]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:26 compute-0 sudo[158036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjmouliaqvcadowwlbfbxsdxmqlnpovb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936405.6330247-111-220848313662055/AnsiballZ_file.py'
Jan 20 19:13:26 compute-0 sudo[158036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:26 compute-0 python3.9[158038]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:26 compute-0 sudo[158036]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:26 compute-0 sudo[158188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grpvrawicymnitcibvkezoqqqhyajyde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936406.376371-111-200149558125678/AnsiballZ_file.py'
Jan 20 19:13:26 compute-0 sudo[158188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:26 compute-0 python3.9[158190]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:26 compute-0 sudo[158188]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:26 compute-0 ceph-mon[75120]: pgmap v440: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:27 compute-0 sudo[158340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dndugrmfnqbpxiczchqjrkchtnmrbnlk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936406.964562-111-196210907982896/AnsiballZ_file.py'
Jan 20 19:13:27 compute-0 sudo[158340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:27 compute-0 python3.9[158342]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:27 compute-0 sudo[158340]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:27 compute-0 sudo[158492]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhwknkwpmpcipamloewhknpakktgnnpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936407.5400019-111-232548626859670/AnsiballZ_file.py'
Jan 20 19:13:27 compute-0 sudo[158492]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:27 compute-0 python3.9[158494]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:27 compute-0 sudo[158492]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:28 compute-0 ceph-mon[75120]: pgmap v441: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:28 compute-0 sudo[158644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpmlzphatjwooevvqtbhzrvsqmagdouc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936408.0667036-111-11519295869310/AnsiballZ_file.py'
Jan 20 19:13:28 compute-0 sudo[158644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:28 compute-0 python3.9[158646]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:28 compute-0 sudo[158644]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:28 compute-0 sudo[158796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umowqldoufdcnxppgbuqwgtssclixhor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936408.630735-111-62376790178640/AnsiballZ_file.py'
Jan 20 19:13:28 compute-0 sudo[158796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:29 compute-0 python3.9[158798]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:29 compute-0 sudo[158796]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:29 compute-0 sudo[158948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kgmqdngewfbvqlamfmzxawiqwlfeczvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936409.2033472-111-21359235692078/AnsiballZ_file.py'
Jan 20 19:13:29 compute-0 sudo[158948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:29 compute-0 sshd-session[158951]: banner exchange: Connection from 104.218.165.188 port 53300: invalid format
Jan 20 19:13:29 compute-0 python3.9[158950]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:29 compute-0 sudo[158948]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:30 compute-0 sudo[159102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-numvgubhoqyfqxkjqolahuxqtoqiczny ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936409.8331218-161-270695388898268/AnsiballZ_file.py'
Jan 20 19:13:30 compute-0 sudo[159102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:30 compute-0 python3.9[159104]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:30 compute-0 sudo[159102]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:30 compute-0 sudo[159254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xymywfcscpbqgdgqgmsjgpkkjhimagpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936410.3851852-161-221178161146316/AnsiballZ_file.py'
Jan 20 19:13:30 compute-0 sudo[159254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:30 compute-0 podman[159256]: 2026-01-20 19:13:30.720846494 +0000 UTC m=+0.093839875 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 19:13:30 compute-0 python3.9[159257]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:30 compute-0 sudo[159254]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:30 compute-0 ceph-mon[75120]: pgmap v442: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:31 compute-0 sudo[159432]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rywnxcdotojjcliqgpqbieqvrnyjirjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936410.961545-161-259783920015021/AnsiballZ_file.py'
Jan 20 19:13:31 compute-0 sudo[159432]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:31 compute-0 python3.9[159434]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:31 compute-0 sudo[159432]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:13:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5615 writes, 879 syncs, 6.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 18.71 MB, 0.03 MB/s
                                           Interval WAL: 5615 writes, 879 syncs, 6.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:13:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:13:31
Jan 20 19:13:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:13:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:13:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'backups']
Jan 20 19:13:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:13:31 compute-0 sudo[159584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkqbvrqdudnesrlndvydutfvqaidoilz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936411.5220509-161-12135502688067/AnsiballZ_file.py'
Jan 20 19:13:31 compute-0 sudo[159584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:31 compute-0 python3.9[159586]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:31 compute-0 sudo[159584]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:32 compute-0 sudo[159736]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rigndggitpefgxfrznyxcsafbfrfgbvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936412.0789828-161-32920978249084/AnsiballZ_file.py'
Jan 20 19:13:32 compute-0 sudo[159736]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:32 compute-0 python3.9[159738]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:32 compute-0 sudo[159736]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:32 compute-0 sudo[159888]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aasqkprfemldomuqvpufcascpkbrdgxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936412.6347563-161-70119117140686/AnsiballZ_file.py'
Jan 20 19:13:32 compute-0 sudo[159888]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:32 compute-0 ceph-mon[75120]: pgmap v443: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:33 compute-0 python3.9[159890]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:33 compute-0 sudo[159888]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:33 compute-0 sudo[160055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyzpsvorfkeskukdnfrgqgmwufdeagjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936413.1698024-161-154994613840462/AnsiballZ_file.py'
Jan 20 19:13:33 compute-0 sudo[160055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:33 compute-0 podman[160014]: 2026-01-20 19:13:33.425210157 +0000 UTC m=+0.052086891 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:13:33 compute-0 python3.9[160061]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:13:33 compute-0 sudo[160055]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:34 compute-0 sudo[160211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kepwlidkmgacgurdphtklxfdlnmaaury ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936413.8336103-212-221734330018224/AnsiballZ_command.py'
Jan 20 19:13:34 compute-0 sudo[160211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:34 compute-0 python3.9[160213]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:13:34 compute-0 sudo[160211]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:13:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:13:34 compute-0 ceph-mon[75120]: pgmap v444: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:35 compute-0 python3.9[160365]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 19:13:35 compute-0 sudo[160515]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-youqjdtngfbotwsuuwtbcspgnarckscp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936415.260486-230-20036421089856/AnsiballZ_systemd_service.py'
Jan 20 19:13:35 compute-0 sudo[160515]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:13:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6904 writes, 28K keys, 6904 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6904 writes, 1315 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6904 writes, 28K keys, 6904 commit groups, 1.0 writes per commit group, ingest: 19.80 MB, 0.03 MB/s
                                           Interval WAL: 6904 writes, 1315 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:13:35 compute-0 python3.9[160517]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 19:13:35 compute-0 systemd[1]: Reloading.
Jan 20 19:13:35 compute-0 systemd-rc-local-generator[160542]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:13:35 compute-0 systemd-sysv-generator[160547]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:13:36 compute-0 ceph-mon[75120]: pgmap v445: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:36 compute-0 sudo[160515]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:36 compute-0 sudo[160701]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgyhrvrdmxkbemksknpychnwsagyeola ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936416.3106143-238-218608599810908/AnsiballZ_command.py'
Jan 20 19:13:36 compute-0 sudo[160701]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:36 compute-0 python3.9[160703]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:13:36 compute-0 sudo[160701]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:37 compute-0 sudo[160854]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqarrsuxgucdgafxdqshpzpmktpehxpu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936416.9022932-238-275367427264285/AnsiballZ_command.py'
Jan 20 19:13:37 compute-0 sudo[160854]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:37 compute-0 python3.9[160856]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:13:37 compute-0 sudo[160854]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:37 compute-0 sudo[161007]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ythaksvswttplzfveyqtbtpgjyjdicyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936417.4856136-238-64775214153139/AnsiballZ_command.py'
Jan 20 19:13:37 compute-0 sudo[161007]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:37 compute-0 python3.9[161009]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:13:37 compute-0 sudo[161007]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:38 compute-0 ceph-mon[75120]: pgmap v446: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:38 compute-0 sudo[161160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tawpgilrumthoaegywebnagrngphnxti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936418.0574887-238-109898737039721/AnsiballZ_command.py'
Jan 20 19:13:38 compute-0 sudo[161160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:38 compute-0 python3.9[161162]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:13:38 compute-0 sudo[161160]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:38 compute-0 sudo[161313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pixgzwyzvzzakibabhkjgfgtomyoqccb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936418.6706636-238-172130631546884/AnsiballZ_command.py'
Jan 20 19:13:38 compute-0 sudo[161313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:39 compute-0 python3.9[161315]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:13:39 compute-0 sudo[161313]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:39 compute-0 sudo[161466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qjkeagarkjbbmyzelnwhetjluwbpibgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936419.2729986-238-163497567483334/AnsiballZ_command.py'
Jan 20 19:13:39 compute-0 sudo[161466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:39 compute-0 python3.9[161468]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:13:39 compute-0 sudo[161466]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:40 compute-0 sudo[161619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnxepwijncadihyshnfubcixsikafnbo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936419.8285077-238-223788423181637/AnsiballZ_command.py'
Jan 20 19:13:40 compute-0 sudo[161619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:40 compute-0 python3.9[161621]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:13:40 compute-0 sudo[161619]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:40 compute-0 ceph-mon[75120]: pgmap v447: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:41 compute-0 sudo[161772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmdnuajlzjkfoatqlzklwkwlqebzmgac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936420.5988207-292-211446381775543/AnsiballZ_getent.py'
Jan 20 19:13:41 compute-0 sudo[161772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:41 compute-0 python3.9[161774]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 20 19:13:41 compute-0 sudo[161772]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:41 compute-0 sudo[161925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wuwephtzktpmhxvdmorfonwmdpairymg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936421.395215-300-226608892998299/AnsiballZ_group.py'
Jan 20 19:13:41 compute-0 sudo[161925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:41 compute-0 python3.9[161927]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 19:13:42 compute-0 groupadd[161928]: group added to /etc/group: name=libvirt, GID=42473
Jan 20 19:13:42 compute-0 groupadd[161928]: group added to /etc/gshadow: name=libvirt
Jan 20 19:13:42 compute-0 groupadd[161928]: new group: name=libvirt, GID=42473
Jan 20 19:13:42 compute-0 sudo[161925]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:13:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5409 writes, 23K keys, 5409 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5409 writes, 759 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5409 writes, 23K keys, 5409 commit groups, 1.0 writes per commit group, ingest: 18.48 MB, 0.03 MB/s
                                           Interval WAL: 5409 writes, 759 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:13:42 compute-0 sudo[162083]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gpgnadwswfodfqbvxatplfhsvlqjllmp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936422.269728-308-247948522433898/AnsiballZ_user.py'
Jan 20 19:13:42 compute-0 sudo[162083]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:42 compute-0 ceph-mon[75120]: pgmap v448: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:42 compute-0 python3.9[162085]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 19:13:43 compute-0 useradd[162087]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 20 19:13:43 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:13:43 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:13:43 compute-0 sudo[162083]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:43 compute-0 sudo[162244]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slsmoxpohxabbtncsekpqujobvnqtcuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936423.4130247-319-249355672020097/AnsiballZ_setup.py'
Jan 20 19:13:43 compute-0 sudo[162244]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:43 compute-0 python3.9[162246]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:13:44 compute-0 sudo[162244]: pam_unix(sudo:session): session closed for user root
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:13:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:13:44 compute-0 sudo[162328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecnalgqpwbyaizornmicqdevejtmjuzv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936423.4130247-319-249355672020097/AnsiballZ_dnf.py'
Jan 20 19:13:44 compute-0 sudo[162328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:13:44 compute-0 python3.9[162330]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:13:45 compute-0 ceph-mon[75120]: pgmap v449: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:46 compute-0 ceph-mon[75120]: pgmap v450: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:46 compute-0 ceph-mgr[75417]: [devicehealth INFO root] Check health
Jan 20 19:13:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:47 compute-0 sshd-session[158978]: Connection closed by 104.218.165.188 port 53304
Jan 20 19:13:48 compute-0 sshd-session[162341]: Connection closed by 104.218.165.188 port 54882 [preauth]
Jan 20 19:13:48 compute-0 sshd-session[162343]: error: Protocol major versions differ: 2 vs. 1
Jan 20 19:13:48 compute-0 sshd-session[162343]: banner exchange: Connection from 104.218.165.188 port 54894: could not read protocol version
Jan 20 19:13:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:49 compute-0 ceph-mon[75120]: pgmap v451: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:50 compute-0 ceph-mon[75120]: pgmap v452: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:51 compute-0 ceph-mon[75120]: pgmap v453: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:54 compute-0 ceph-mon[75120]: pgmap v454: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:56 compute-0 ceph-mon[75120]: pgmap v455: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:13:58 compute-0 ceph-mon[75120]: pgmap v456: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:13:59 compute-0 ceph-mon[75120]: pgmap v457: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:01 compute-0 podman[162517]: 2026-01-20 19:14:01.409654729 +0000 UTC m=+0.084769499 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 19:14:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:02 compute-0 ceph-mon[75120]: pgmap v458: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:04 compute-0 podman[162550]: 2026-01-20 19:14:04.392037179 +0000 UTC m=+0.061077586 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 19:14:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:04 compute-0 ceph-mon[75120]: pgmap v459: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:14:05.438 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:14:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:14:05.439 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:14:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:14:05.439 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:14:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:06 compute-0 ceph-mon[75120]: pgmap v460: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:07 compute-0 ceph-mon[75120]: pgmap v461: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:10 compute-0 ceph-mon[75120]: pgmap v462: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:12 compute-0 ceph-mon[75120]: pgmap v463: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:13 compute-0 sshd-session[162571]: Invalid user eth from 45.148.10.240 port 56476
Jan 20 19:14:13 compute-0 sshd-session[162571]: Connection closed by invalid user eth 45.148.10.240 port 56476 [preauth]
Jan 20 19:14:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:14 compute-0 ceph-mon[75120]: pgmap v464: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:14 compute-0 kernel: SELinux:  Converting 2774 SID table entries...
Jan 20 19:14:14 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 19:14:14 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 19:14:14 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 19:14:14 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 19:14:14 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 19:14:14 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 19:14:14 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 19:14:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:16 compute-0 ceph-mon[75120]: pgmap v465: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:18 compute-0 ceph-mon[75120]: pgmap v466: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:19 compute-0 ceph-mon[75120]: pgmap v467: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:20 compute-0 sudo[162582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:14:20 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 20 19:14:20 compute-0 sudo[162582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:20 compute-0 sudo[162582]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:20 compute-0 sudo[162607]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:14:20 compute-0 sudo[162607]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:21 compute-0 sudo[162607]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 19:14:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 19:14:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:14:21 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:14:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:14:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:14:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:14:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:14:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:14:21 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:14:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:14:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:14:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:14:21 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:14:21 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 19:14:21 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:14:21 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:14:21 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:14:21 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:14:21 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:14:21 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:14:21 compute-0 sudo[162664]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:14:21 compute-0 sudo[162664]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:21 compute-0 sudo[162664]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:21 compute-0 sudo[162689]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:14:21 compute-0 sudo[162689]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:21 compute-0 podman[162726]: 2026-01-20 19:14:21.785488993 +0000 UTC m=+0.045724710 container create ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 19:14:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:21 compute-0 systemd[1]: Started libpod-conmon-ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b.scope.
Jan 20 19:14:21 compute-0 podman[162726]: 2026-01-20 19:14:21.766081196 +0000 UTC m=+0.026316943 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:14:21 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:14:21 compute-0 podman[162726]: 2026-01-20 19:14:21.894570422 +0000 UTC m=+0.154806169 container init ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 19:14:21 compute-0 podman[162726]: 2026-01-20 19:14:21.9031114 +0000 UTC m=+0.163347127 container start ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:14:21 compute-0 podman[162726]: 2026-01-20 19:14:21.9080465 +0000 UTC m=+0.168282217 container attach ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:14:21 compute-0 blissful_blackburn[162743]: 167 167
Jan 20 19:14:21 compute-0 systemd[1]: libpod-ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b.scope: Deactivated successfully.
Jan 20 19:14:21 compute-0 podman[162726]: 2026-01-20 19:14:21.915279059 +0000 UTC m=+0.175514776 container died ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 20 19:14:21 compute-0 systemd[1]: var-lib-containers-storage-overlay-75c8f0fa0cc91dac964428a5d773b5b97cfa65c14e4c3b43a534eb345aa986b2-merged.mount: Deactivated successfully.
Jan 20 19:14:21 compute-0 podman[162726]: 2026-01-20 19:14:21.964471425 +0000 UTC m=+0.224707142 container remove ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=blissful_blackburn, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:14:21 compute-0 systemd[1]: libpod-conmon-ce7e9cb9a612815ebc02deb31356bff754d818846bc4a5af476cf4fe4c69127b.scope: Deactivated successfully.
Jan 20 19:14:22 compute-0 podman[162766]: 2026-01-20 19:14:22.126763269 +0000 UTC m=+0.046638491 container create 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Jan 20 19:14:22 compute-0 systemd[1]: Started libpod-conmon-20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa.scope.
Jan 20 19:14:22 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:14:22 compute-0 podman[162766]: 2026-01-20 19:14:22.10369546 +0000 UTC m=+0.023570712 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:22 compute-0 podman[162766]: 2026-01-20 19:14:22.227293849 +0000 UTC m=+0.147169101 container init 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 19:14:22 compute-0 podman[162766]: 2026-01-20 19:14:22.234572539 +0000 UTC m=+0.154447801 container start 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 20 19:14:22 compute-0 podman[162766]: 2026-01-20 19:14:22.24865803 +0000 UTC m=+0.168533272 container attach 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:14:22 compute-0 ceph-mon[75120]: pgmap v468: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:22 compute-0 angry_lewin[162783]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:14:22 compute-0 angry_lewin[162783]: --> All data devices are unavailable
Jan 20 19:14:22 compute-0 systemd[1]: libpod-20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa.scope: Deactivated successfully.
Jan 20 19:14:22 compute-0 podman[162766]: 2026-01-20 19:14:22.792282024 +0000 UTC m=+0.712157246 container died 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:14:22 compute-0 systemd[1]: var-lib-containers-storage-overlay-26860381f9634a39a869e3468cb4761a02907befc0a1af8d8a08048344ba5d7c-merged.mount: Deactivated successfully.
Jan 20 19:14:22 compute-0 podman[162766]: 2026-01-20 19:14:22.831386557 +0000 UTC m=+0.751261779 container remove 20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_lewin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:14:22 compute-0 systemd[1]: libpod-conmon-20e0653b0cc0b8ec67d69ba45b321da9edbcb056d90899ff985c756615cb9caa.scope: Deactivated successfully.
Jan 20 19:14:22 compute-0 sudo[162689]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:22 compute-0 sudo[162815]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:14:22 compute-0 sudo[162815]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:22 compute-0 sudo[162815]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:22 compute-0 sudo[162840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:14:22 compute-0 sudo[162840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:23 compute-0 podman[162877]: 2026-01-20 19:14:23.247668629 +0000 UTC m=+0.037829936 container create 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:14:23 compute-0 systemd[1]: Started libpod-conmon-51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261.scope.
Jan 20 19:14:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:14:23 compute-0 podman[162877]: 2026-01-20 19:14:23.320978097 +0000 UTC m=+0.111139414 container init 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:14:23 compute-0 podman[162877]: 2026-01-20 19:14:23.230875878 +0000 UTC m=+0.021037175 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:14:23 compute-0 podman[162877]: 2026-01-20 19:14:23.326673473 +0000 UTC m=+0.116834780 container start 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:14:23 compute-0 podman[162877]: 2026-01-20 19:14:23.329627078 +0000 UTC m=+0.119788385 container attach 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Jan 20 19:14:23 compute-0 peaceful_shtern[162893]: 167 167
Jan 20 19:14:23 compute-0 systemd[1]: libpod-51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261.scope: Deactivated successfully.
Jan 20 19:14:23 compute-0 podman[162877]: 2026-01-20 19:14:23.33147516 +0000 UTC m=+0.121636467 container died 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:14:23 compute-0 systemd[1]: var-lib-containers-storage-overlay-856c4f61f223d2f72657235249d5e7463168fb9d00ddf17db01de23080ab6bea-merged.mount: Deactivated successfully.
Jan 20 19:14:23 compute-0 podman[162877]: 2026-01-20 19:14:23.366419821 +0000 UTC m=+0.156581118 container remove 51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=peaceful_shtern, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 19:14:23 compute-0 systemd[1]: libpod-conmon-51efa6cebf584a28524849587be9783bb18fa01158cc613ce90792462206b261.scope: Deactivated successfully.
Jan 20 19:14:23 compute-0 podman[162916]: 2026-01-20 19:14:23.505806578 +0000 UTC m=+0.022855615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:14:23 compute-0 podman[162916]: 2026-01-20 19:14:23.604691632 +0000 UTC m=+0.121740649 container create 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle)
Jan 20 19:14:23 compute-0 systemd[1]: Started libpod-conmon-71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3.scope.
Jan 20 19:14:23 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ba5856ddcf4958d31c45846c547d8ab5f4a19af1729a69a8f404f0d200be18/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ba5856ddcf4958d31c45846c547d8ab5f4a19af1729a69a8f404f0d200be18/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ba5856ddcf4958d31c45846c547d8ab5f4a19af1729a69a8f404f0d200be18/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ba5856ddcf4958d31c45846c547d8ab5f4a19af1729a69a8f404f0d200be18/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:23 compute-0 podman[162916]: 2026-01-20 19:14:23.704003605 +0000 UTC m=+0.221052642 container init 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 19:14:23 compute-0 podman[162916]: 2026-01-20 19:14:23.710390846 +0000 UTC m=+0.227439863 container start 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:14:23 compute-0 podman[162916]: 2026-01-20 19:14:23.722200967 +0000 UTC m=+0.239249984 container attach 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True)
Jan 20 19:14:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:23 compute-0 hungry_clarke[162933]: {
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:     "0": [
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:         {
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "devices": [
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "/dev/loop3"
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             ],
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_name": "ceph_lv0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_size": "21470642176",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "name": "ceph_lv0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "tags": {
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.cluster_name": "ceph",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.crush_device_class": "",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.encrypted": "0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.objectstore": "bluestore",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.osd_id": "0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.type": "block",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.vdo": "0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.with_tpm": "0"
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             },
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "type": "block",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "vg_name": "ceph_vg0"
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:         }
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:     ],
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:     "1": [
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:         {
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "devices": [
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "/dev/loop4"
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             ],
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_name": "ceph_lv1",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_size": "21470642176",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "name": "ceph_lv1",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "tags": {
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.cluster_name": "ceph",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.crush_device_class": "",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.encrypted": "0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.objectstore": "bluestore",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.osd_id": "1",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.type": "block",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.vdo": "0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.with_tpm": "0"
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             },
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "type": "block",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "vg_name": "ceph_vg1"
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:         }
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:     ],
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:     "2": [
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:         {
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "devices": [
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "/dev/loop5"
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             ],
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_name": "ceph_lv2",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_size": "21470642176",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "name": "ceph_lv2",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "tags": {
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.cluster_name": "ceph",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.crush_device_class": "",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.encrypted": "0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.objectstore": "bluestore",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.osd_id": "2",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.type": "block",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.vdo": "0",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:                 "ceph.with_tpm": "0"
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             },
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "type": "block",
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:             "vg_name": "ceph_vg2"
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:         }
Jan 20 19:14:23 compute-0 hungry_clarke[162933]:     ]
Jan 20 19:14:23 compute-0 hungry_clarke[162933]: }
Jan 20 19:14:24 compute-0 systemd[1]: libpod-71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3.scope: Deactivated successfully.
Jan 20 19:14:24 compute-0 podman[162916]: 2026-01-20 19:14:24.015537153 +0000 UTC m=+0.532586170 container died 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-68ba5856ddcf4958d31c45846c547d8ab5f4a19af1729a69a8f404f0d200be18-merged.mount: Deactivated successfully.
Jan 20 19:14:24 compute-0 podman[162916]: 2026-01-20 19:14:24.063038602 +0000 UTC m=+0.580087629 container remove 71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_clarke, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 19:14:24 compute-0 systemd[1]: libpod-conmon-71592999f72fbbf8dd00332200ffa75fea16dcc7cf2b12fb98f3f999dfd8c0e3.scope: Deactivated successfully.
Jan 20 19:14:24 compute-0 sudo[162840]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:24 compute-0 sudo[162953]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:14:24 compute-0 sudo[162953]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:24 compute-0 sudo[162953]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:24 compute-0 sudo[162978]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:14:24 compute-0 sudo[162978]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:24 compute-0 podman[163016]: 2026-01-20 19:14:24.522814895 +0000 UTC m=+0.037669153 container create 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:14:24 compute-0 systemd[1]: Started libpod-conmon-901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9.scope.
Jan 20 19:14:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:14:24 compute-0 podman[163016]: 2026-01-20 19:14:24.592086574 +0000 UTC m=+0.106940832 container init 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:14:24 compute-0 podman[163016]: 2026-01-20 19:14:24.597996504 +0000 UTC m=+0.112850762 container start 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:14:24 compute-0 podman[163016]: 2026-01-20 19:14:24.600909788 +0000 UTC m=+0.115764066 container attach 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:14:24 compute-0 youthful_darwin[163032]: 167 167
Jan 20 19:14:24 compute-0 systemd[1]: libpod-901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9.scope: Deactivated successfully.
Jan 20 19:14:24 compute-0 podman[163016]: 2026-01-20 19:14:24.601981222 +0000 UTC m=+0.116835480 container died 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:14:24 compute-0 podman[163016]: 2026-01-20 19:14:24.507405585 +0000 UTC m=+0.022259863 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:14:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-ddc0389ca93d8d94030372cb1cda21d5547e45927d8658d4e5c7bf36cbff3120-merged.mount: Deactivated successfully.
Jan 20 19:14:24 compute-0 podman[163016]: 2026-01-20 19:14:24.640448582 +0000 UTC m=+0.155302840 container remove 901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=youthful_darwin, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:14:24 compute-0 systemd[1]: libpod-conmon-901b896c48ec1846c26f43c20fca5b92d7bea67525b893437d73ffa74b7b78d9.scope: Deactivated successfully.
Jan 20 19:14:24 compute-0 podman[163055]: 2026-01-20 19:14:24.833611248 +0000 UTC m=+0.071797538 container create 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 19:14:24 compute-0 systemd[1]: Started libpod-conmon-2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff.scope.
Jan 20 19:14:24 compute-0 ceph-mon[75120]: pgmap v469: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:24 compute-0 podman[163055]: 2026-01-20 19:14:24.786561128 +0000 UTC m=+0.024747438 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:14:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5f3237df8b237716a3baa06abce384ebdd8ec2df93f70e2075f695fb855c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5f3237df8b237716a3baa06abce384ebdd8ec2df93f70e2075f695fb855c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5f3237df8b237716a3baa06abce384ebdd8ec2df93f70e2075f695fb855c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9c5f3237df8b237716a3baa06abce384ebdd8ec2df93f70e2075f695fb855c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:14:24 compute-0 podman[163055]: 2026-01-20 19:14:24.914742778 +0000 UTC m=+0.152929168 container init 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:14:24 compute-0 podman[163055]: 2026-01-20 19:14:24.925563477 +0000 UTC m=+0.163749767 container start 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:14:25 compute-0 podman[163055]: 2026-01-20 19:14:25.019517582 +0000 UTC m=+0.257703882 container attach 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 19:14:25 compute-0 kernel: SELinux:  Converting 2774 SID table entries...
Jan 20 19:14:25 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 19:14:25 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 19:14:25 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 19:14:25 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 19:14:25 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 19:14:25 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 19:14:25 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 19:14:25 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 20 19:14:25 compute-0 lvm[163157]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:14:25 compute-0 lvm[163158]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:14:25 compute-0 lvm[163158]: VG ceph_vg1 finished
Jan 20 19:14:25 compute-0 lvm[163157]: VG ceph_vg0 finished
Jan 20 19:14:25 compute-0 lvm[163160]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:14:25 compute-0 lvm[163160]: VG ceph_vg2 finished
Jan 20 19:14:25 compute-0 stoic_chebyshev[163072]: {}
Jan 20 19:14:25 compute-0 podman[163055]: 2026-01-20 19:14:25.695878666 +0000 UTC m=+0.934064956 container died 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:14:25 compute-0 systemd[1]: libpod-2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff.scope: Deactivated successfully.
Jan 20 19:14:25 compute-0 systemd[1]: libpod-2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff.scope: Consumed 1.218s CPU time.
Jan 20 19:14:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-c9c5f3237df8b237716a3baa06abce384ebdd8ec2df93f70e2075f695fb855c3-merged.mount: Deactivated successfully.
Jan 20 19:14:25 compute-0 podman[163055]: 2026-01-20 19:14:25.756980646 +0000 UTC m=+0.995166936 container remove 2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:14:25 compute-0 systemd[1]: libpod-conmon-2ea8fdd13638a88ca094916277e7120e2d88b7c9e24982946233c1fa666794ff.scope: Deactivated successfully.
Jan 20 19:14:25 compute-0 sudo[162978]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:14:25 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:14:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:14:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Jan 20 19:14:25 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:14:25 compute-0 sudo[163177]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:14:25 compute-0 sudo[163177]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:14:25 compute-0 sudo[163177]: pam_unix(sudo:session): session closed for user root
Jan 20 19:14:27 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:14:27 compute-0 ceph-mon[75120]: pgmap v470: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Jan 20 19:14:27 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:14:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Jan 20 19:14:28 compute-0 ceph-mon[75120]: pgmap v471: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Jan 20 19:14:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:14:30 compute-0 ceph-mon[75120]: pgmap v472: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:14:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:14:31
Jan 20 19:14:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:14:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:14:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'vms', '.mgr', 'default.rgw.log', '.rgw.root', 'backups', 'default.rgw.meta', 'volumes']
Jan 20 19:14:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:14:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:14:32 compute-0 podman[163202]: 2026-01-20 19:14:32.452578346 +0000 UTC m=+0.108231980 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Jan 20 19:14:32 compute-0 ceph-mon[75120]: pgmap v473: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:14:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:14:33 compute-0 ceph-mon[75120]: pgmap v474: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:14:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:14:35 compute-0 podman[163228]: 2026-01-20 19:14:35.374198906 +0000 UTC m=+0.049502154 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:14:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:14:36 compute-0 ceph-mon[75120]: pgmap v475: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:14:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Jan 20 19:14:37 compute-0 ceph-mon[75120]: pgmap v476: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Jan 20 19:14:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Jan 20 19:14:40 compute-0 ceph-mon[75120]: pgmap v477: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 4.5 KiB/s rd, 0 B/s wr, 7 op/s
Jan 20 19:14:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:42 compute-0 ceph-mon[75120]: pgmap v478: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:14:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:14:44 compute-0 ceph-mon[75120]: pgmap v479: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:45 compute-0 ceph-mon[75120]: pgmap v480: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:48 compute-0 ceph-mon[75120]: pgmap v481: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:50 compute-0 ceph-mon[75120]: pgmap v482: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:52 compute-0 ceph-mon[75120]: pgmap v483: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:54 compute-0 ceph-mon[75120]: pgmap v484: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:56 compute-0 ceph-mon[75120]: pgmap v485: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:14:58 compute-0 ceph-mon[75120]: pgmap v486: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:14:59 compute-0 ceph-mon[75120]: pgmap v487: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:02 compute-0 ceph-mon[75120]: pgmap v488: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:03 compute-0 podman[178593]: 2026-01-20 19:15:03.403665761 +0000 UTC m=+0.080566200 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:15:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:04 compute-0 ceph-mon[75120]: pgmap v489: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:15:05.440 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:15:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:15:05.440 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:15:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:15:05.441 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:15:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:06 compute-0 podman[180124]: 2026-01-20 19:15:06.400834969 +0000 UTC m=+0.080290354 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 19:15:06 compute-0 ceph-mon[75120]: pgmap v490: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:08 compute-0 ceph-mon[75120]: pgmap v491: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:11 compute-0 ceph-mon[75120]: pgmap v492: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:12 compute-0 ceph-mon[75120]: pgmap v493: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:14 compute-0 ceph-mon[75120]: pgmap v494: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:17 compute-0 ceph-mon[75120]: pgmap v495: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:18 compute-0 ceph-mon[75120]: pgmap v496: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:20 compute-0 ceph-mon[75120]: pgmap v497: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:20 compute-0 kernel: SELinux:  Converting 2775 SID table entries...
Jan 20 19:15:20 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 20 19:15:20 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 20 19:15:20 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 20 19:15:20 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 20 19:15:20 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 20 19:15:20 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 20 19:15:20 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 20 19:15:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:21 compute-0 ceph-mon[75120]: pgmap v498: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:22 compute-0 groupadd[180177]: group added to /etc/group: name=dnsmasq, GID=992
Jan 20 19:15:22 compute-0 groupadd[180177]: group added to /etc/gshadow: name=dnsmasq
Jan 20 19:15:22 compute-0 groupadd[180177]: new group: name=dnsmasq, GID=992
Jan 20 19:15:22 compute-0 useradd[180184]: new user: name=dnsmasq, UID=991, GID=992, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 20 19:15:22 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 19:15:22 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 20 19:15:22 compute-0 dbus-broker-launch[760]: Noticed file-system modification, trigger reload.
Jan 20 19:15:23 compute-0 groupadd[180197]: group added to /etc/group: name=clevis, GID=991
Jan 20 19:15:23 compute-0 groupadd[180197]: group added to /etc/gshadow: name=clevis
Jan 20 19:15:23 compute-0 groupadd[180197]: new group: name=clevis, GID=991
Jan 20 19:15:23 compute-0 useradd[180204]: new user: name=clevis, UID=990, GID=991, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 20 19:15:23 compute-0 usermod[180214]: add 'clevis' to group 'tss'
Jan 20 19:15:23 compute-0 usermod[180214]: add 'clevis' to shadow group 'tss'
Jan 20 19:15:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:24 compute-0 ceph-mon[75120]: pgmap v499: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:25 compute-0 sudo[180238]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:15:25 compute-0 sudo[180238]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:25 compute-0 sudo[180238]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:26 compute-0 sudo[180263]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 20 19:15:26 compute-0 sudo[180263]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:26 compute-0 polkitd[43397]: Reloading rules
Jan 20 19:15:26 compute-0 polkitd[43397]: Collecting garbage unconditionally...
Jan 20 19:15:26 compute-0 polkitd[43397]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 19:15:26 compute-0 polkitd[43397]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 19:15:26 compute-0 polkitd[43397]: Finished loading, compiling and executing 3 rules
Jan 20 19:15:26 compute-0 polkitd[43397]: Reloading rules
Jan 20 19:15:26 compute-0 polkitd[43397]: Collecting garbage unconditionally...
Jan 20 19:15:26 compute-0 polkitd[43397]: Loading rules from directory /etc/polkit-1/rules.d
Jan 20 19:15:26 compute-0 polkitd[43397]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 20 19:15:26 compute-0 polkitd[43397]: Finished loading, compiling and executing 3 rules
Jan 20 19:15:26 compute-0 podman[180351]: 2026-01-20 19:15:26.479302036 +0000 UTC m=+0.074247370 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:15:26 compute-0 podman[180351]: 2026-01-20 19:15:26.591839598 +0000 UTC m=+0.186784932 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 20 19:15:26 compute-0 ceph-mon[75120]: pgmap v500: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:27 compute-0 sudo[180263]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:27 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:15:27 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:15:27 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:15:27 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:15:27 compute-0 sudo[180673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:15:27 compute-0 sudo[180673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:27 compute-0 sudo[180673]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:27 compute-0 sudo[180708]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:15:27 compute-0 sudo[180708]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:28 compute-0 sudo[180708]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:15:28 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:15:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:15:28 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:15:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:15:28 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:15:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:15:28 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:15:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:15:28 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:15:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:15:28 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:15:28 compute-0 sudo[180768]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:15:28 compute-0 sudo[180768]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:28 compute-0 sudo[180768]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:28 compute-0 sudo[180793]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:15:28 compute-0 sudo[180793]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:28 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:15:28 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:15:28 compute-0 ceph-mon[75120]: pgmap v501: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:28 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:15:28 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:15:28 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:15:28 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:15:28 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:15:28 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:15:28 compute-0 podman[180829]: 2026-01-20 19:15:28.541633716 +0000 UTC m=+0.041278928 container create 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:15:28 compute-0 systemd[1]: Started libpod-conmon-6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130.scope.
Jan 20 19:15:28 compute-0 podman[180829]: 2026-01-20 19:15:28.52334819 +0000 UTC m=+0.022993422 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:15:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:28 compute-0 podman[180829]: 2026-01-20 19:15:28.647232008 +0000 UTC m=+0.146877240 container init 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3)
Jan 20 19:15:28 compute-0 podman[180829]: 2026-01-20 19:15:28.655894689 +0000 UTC m=+0.155539901 container start 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:15:28 compute-0 podman[180829]: 2026-01-20 19:15:28.660291086 +0000 UTC m=+0.159936298 container attach 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:15:28 compute-0 exciting_nash[180846]: 167 167
Jan 20 19:15:28 compute-0 systemd[1]: libpod-6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130.scope: Deactivated successfully.
Jan 20 19:15:28 compute-0 conmon[180846]: conmon 6ef8c9cb15dfbee19999 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130.scope/container/memory.events
Jan 20 19:15:28 compute-0 podman[180829]: 2026-01-20 19:15:28.664628613 +0000 UTC m=+0.164273845 container died 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:15:28 compute-0 systemd[1]: var-lib-containers-storage-overlay-1ba4fb6cd5a13a74fe7b08fc996aea0c1503ef7512d2a17f37f5d332c0573b9f-merged.mount: Deactivated successfully.
Jan 20 19:15:28 compute-0 podman[180829]: 2026-01-20 19:15:28.720954715 +0000 UTC m=+0.220599927 container remove 6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_nash, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle)
Jan 20 19:15:28 compute-0 systemd[1]: libpod-conmon-6ef8c9cb15dfbee199992a5aa07200ededb7597e3e303268951395c2bd013130.scope: Deactivated successfully.
Jan 20 19:15:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:28 compute-0 podman[180869]: 2026-01-20 19:15:28.886839676 +0000 UTC m=+0.050114211 container create 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:15:28 compute-0 systemd[1]: Started libpod-conmon-2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626.scope.
Jan 20 19:15:28 compute-0 podman[180869]: 2026-01-20 19:15:28.867135546 +0000 UTC m=+0.030410081 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:15:28 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:28 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:28 compute-0 podman[180869]: 2026-01-20 19:15:28.978537541 +0000 UTC m=+0.141812096 container init 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:15:28 compute-0 podman[180869]: 2026-01-20 19:15:28.985924221 +0000 UTC m=+0.149198746 container start 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:15:28 compute-0 podman[180869]: 2026-01-20 19:15:28.989699763 +0000 UTC m=+0.152974318 container attach 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:15:29 compute-0 intelligent_ganguly[180886]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:15:29 compute-0 intelligent_ganguly[180886]: --> All data devices are unavailable
Jan 20 19:15:29 compute-0 systemd[1]: libpod-2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626.scope: Deactivated successfully.
Jan 20 19:15:29 compute-0 podman[180869]: 2026-01-20 19:15:29.462700858 +0000 UTC m=+0.625975383 container died 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:15:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-b372eb13545021e02f6a0a763e09da840805a3989cb00283019133d9987104dd-merged.mount: Deactivated successfully.
Jan 20 19:15:29 compute-0 podman[180869]: 2026-01-20 19:15:29.509021476 +0000 UTC m=+0.672296001 container remove 2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:15:29 compute-0 systemd[1]: libpod-conmon-2a2ae920b968b28c8f64c51f9524b4eb9b3519754a860d8add9424293f0d0626.scope: Deactivated successfully.
Jan 20 19:15:29 compute-0 sudo[180793]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:29 compute-0 sudo[180917]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:15:29 compute-0 sudo[180917]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:29 compute-0 sudo[180917]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:29 compute-0 sudo[180942]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:15:29 compute-0 sudo[180942]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:29 compute-0 podman[180981]: 2026-01-20 19:15:29.97707517 +0000 UTC m=+0.036540631 container create 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:15:30 compute-0 systemd[1]: Started libpod-conmon-30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6.scope.
Jan 20 19:15:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:30 compute-0 podman[180981]: 2026-01-20 19:15:30.047805823 +0000 UTC m=+0.107271304 container init 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:15:30 compute-0 podman[180981]: 2026-01-20 19:15:30.054314472 +0000 UTC m=+0.113779933 container start 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:15:30 compute-0 podman[180981]: 2026-01-20 19:15:29.960178049 +0000 UTC m=+0.019643530 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:15:30 compute-0 podman[180981]: 2026-01-20 19:15:30.058006222 +0000 UTC m=+0.117471703 container attach 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Jan 20 19:15:30 compute-0 sweet_carson[181047]: 167 167
Jan 20 19:15:30 compute-0 systemd[1]: libpod-30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6.scope: Deactivated successfully.
Jan 20 19:15:30 compute-0 podman[180981]: 2026-01-20 19:15:30.060093153 +0000 UTC m=+0.119558614 container died 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-1e2d226889fbb364c343791803bf58c5a0ec1eb07da0309c244f6c7e497bffe8-merged.mount: Deactivated successfully.
Jan 20 19:15:30 compute-0 podman[180981]: 2026-01-20 19:15:30.094907601 +0000 UTC m=+0.154373062 container remove 30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sweet_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:15:30 compute-0 systemd[1]: libpod-conmon-30b67def7dd19dfed64e7a2018bab2b921a565fb67f76a485b265b6a338cc7b6.scope: Deactivated successfully.
Jan 20 19:15:30 compute-0 podman[181206]: 2026-01-20 19:15:30.24709152 +0000 UTC m=+0.042188169 container create d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 19:15:30 compute-0 systemd[1]: Started libpod-conmon-d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e.scope.
Jan 20 19:15:30 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c6f06cbbc0cd66f36e24550e6822b6a6cb9cb74bf52e329dcc5045ae644136/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c6f06cbbc0cd66f36e24550e6822b6a6cb9cb74bf52e329dcc5045ae644136/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c6f06cbbc0cd66f36e24550e6822b6a6cb9cb74bf52e329dcc5045ae644136/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:30 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c6f06cbbc0cd66f36e24550e6822b6a6cb9cb74bf52e329dcc5045ae644136/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:30 compute-0 podman[181206]: 2026-01-20 19:15:30.228289111 +0000 UTC m=+0.023385770 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:15:30 compute-0 podman[181206]: 2026-01-20 19:15:30.331767492 +0000 UTC m=+0.126864171 container init d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:15:30 compute-0 podman[181206]: 2026-01-20 19:15:30.337869842 +0000 UTC m=+0.132966491 container start d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:15:30 compute-0 podman[181206]: 2026-01-20 19:15:30.372229638 +0000 UTC m=+0.167326317 container attach d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:15:30 compute-0 compassionate_bose[181288]: {
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:     "0": [
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:         {
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "devices": [
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "/dev/loop3"
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             ],
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_name": "ceph_lv0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_size": "21470642176",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "name": "ceph_lv0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "tags": {
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.cluster_name": "ceph",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.crush_device_class": "",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.encrypted": "0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.objectstore": "bluestore",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.osd_id": "0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.type": "block",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.vdo": "0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.with_tpm": "0"
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             },
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "type": "block",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "vg_name": "ceph_vg0"
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:         }
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:     ],
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:     "1": [
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:         {
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "devices": [
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "/dev/loop4"
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             ],
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_name": "ceph_lv1",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_size": "21470642176",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "name": "ceph_lv1",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "tags": {
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.cluster_name": "ceph",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.crush_device_class": "",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.encrypted": "0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.objectstore": "bluestore",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.osd_id": "1",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.type": "block",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.vdo": "0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.with_tpm": "0"
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             },
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "type": "block",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "vg_name": "ceph_vg1"
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:         }
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:     ],
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:     "2": [
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:         {
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "devices": [
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "/dev/loop5"
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             ],
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_name": "ceph_lv2",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_size": "21470642176",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "name": "ceph_lv2",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "tags": {
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.cluster_name": "ceph",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.crush_device_class": "",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.encrypted": "0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.objectstore": "bluestore",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.osd_id": "2",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.type": "block",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.vdo": "0",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:                 "ceph.with_tpm": "0"
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             },
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "type": "block",
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:             "vg_name": "ceph_vg2"
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:         }
Jan 20 19:15:30 compute-0 compassionate_bose[181288]:     ]
Jan 20 19:15:30 compute-0 compassionate_bose[181288]: }
Jan 20 19:15:30 compute-0 systemd[1]: libpod-d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e.scope: Deactivated successfully.
Jan 20 19:15:30 compute-0 podman[181206]: 2026-01-20 19:15:30.64808348 +0000 UTC m=+0.443180129 container died d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:15:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-19c6f06cbbc0cd66f36e24550e6822b6a6cb9cb74bf52e329dcc5045ae644136-merged.mount: Deactivated successfully.
Jan 20 19:15:30 compute-0 podman[181206]: 2026-01-20 19:15:30.692067952 +0000 UTC m=+0.487164611 container remove d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=compassionate_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:15:30 compute-0 systemd[1]: libpod-conmon-d85e6559be269de01e7fe6819d163c652fb856cda6661a75a895aa1528815d6e.scope: Deactivated successfully.
Jan 20 19:15:30 compute-0 sudo[180942]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:30 compute-0 sudo[181672]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:15:30 compute-0 sudo[181672]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:30 compute-0 sudo[181672]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:30 compute-0 sudo[181701]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:15:30 compute-0 sudo[181701]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:30 compute-0 ceph-mon[75120]: pgmap v502: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:30 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 20 19:15:30 compute-0 sshd[1008]: Received signal 15; terminating.
Jan 20 19:15:30 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 20 19:15:30 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 20 19:15:30 compute-0 systemd[1]: sshd.service: Consumed 3.350s CPU time, read 564.0K from disk, written 68.0K to disk.
Jan 20 19:15:30 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 20 19:15:30 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 20 19:15:30 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 19:15:30 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 19:15:30 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 20 19:15:30 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 20 19:15:30 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 20 19:15:30 compute-0 sshd[181730]: Server listening on 0.0.0.0 port 22.
Jan 20 19:15:30 compute-0 sshd[181730]: Server listening on :: port 22.
Jan 20 19:15:30 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 20 19:15:31 compute-0 podman[181758]: 2026-01-20 19:15:31.132726538 +0000 UTC m=+0.036620493 container create 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:15:31 compute-0 systemd[1]: Started libpod-conmon-769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b.scope.
Jan 20 19:15:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:31 compute-0 podman[181758]: 2026-01-20 19:15:31.202201041 +0000 UTC m=+0.106095006 container init 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:15:31 compute-0 podman[181758]: 2026-01-20 19:15:31.210679037 +0000 UTC m=+0.114573032 container start 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 19:15:31 compute-0 podman[181758]: 2026-01-20 19:15:31.115308964 +0000 UTC m=+0.019202949 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:15:31 compute-0 interesting_newton[181785]: 167 167
Jan 20 19:15:31 compute-0 podman[181758]: 2026-01-20 19:15:31.214170013 +0000 UTC m=+0.118063998 container attach 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:15:31 compute-0 systemd[1]: libpod-769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b.scope: Deactivated successfully.
Jan 20 19:15:31 compute-0 podman[181758]: 2026-01-20 19:15:31.217470033 +0000 UTC m=+0.121363998 container died 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 19:15:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-853f09be80d9311c7bd4e4e6e912ed84fc4fde97b5d4ea3d99b211375922bcbd-merged.mount: Deactivated successfully.
Jan 20 19:15:31 compute-0 podman[181758]: 2026-01-20 19:15:31.274304337 +0000 UTC m=+0.178198302 container remove 769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=interesting_newton, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 19:15:31 compute-0 systemd[1]: libpod-conmon-769239210b954749b0ca42ae453f8e3f3af19404ae8daf21a085579905622d3b.scope: Deactivated successfully.
Jan 20 19:15:31 compute-0 podman[181833]: 2026-01-20 19:15:31.436023928 +0000 UTC m=+0.039413321 container create 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:15:31 compute-0 systemd[1]: Started libpod-conmon-856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260.scope.
Jan 20 19:15:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0287d3f3ce276279a5d2d03bb2bf1203a7630e1d0faa378bc0fc22513f0ffe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0287d3f3ce276279a5d2d03bb2bf1203a7630e1d0faa378bc0fc22513f0ffe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0287d3f3ce276279a5d2d03bb2bf1203a7630e1d0faa378bc0fc22513f0ffe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7d0287d3f3ce276279a5d2d03bb2bf1203a7630e1d0faa378bc0fc22513f0ffe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:15:31 compute-0 podman[181833]: 2026-01-20 19:15:31.419037585 +0000 UTC m=+0.022426998 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:15:31 compute-0 podman[181833]: 2026-01-20 19:15:31.516486918 +0000 UTC m=+0.119876361 container init 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:15:31 compute-0 podman[181833]: 2026-01-20 19:15:31.523064599 +0000 UTC m=+0.126453992 container start 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:15:31 compute-0 podman[181833]: 2026-01-20 19:15:31.526897923 +0000 UTC m=+0.130287336 container attach 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:15:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:15:31
Jan 20 19:15:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:15:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:15:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'vms', 'images', 'volumes']
Jan 20 19:15:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:15:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:32 compute-0 lvm[182050]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:15:32 compute-0 lvm[182048]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:15:32 compute-0 lvm[182047]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:15:32 compute-0 lvm[182047]: VG ceph_vg0 finished
Jan 20 19:15:32 compute-0 lvm[182048]: VG ceph_vg1 finished
Jan 20 19:15:32 compute-0 lvm[182050]: VG ceph_vg2 finished
Jan 20 19:15:32 compute-0 frosty_satoshi[181859]: {}
Jan 20 19:15:32 compute-0 systemd[1]: libpod-856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260.scope: Deactivated successfully.
Jan 20 19:15:32 compute-0 systemd[1]: libpod-856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260.scope: Consumed 1.287s CPU time.
Jan 20 19:15:32 compute-0 podman[181833]: 2026-01-20 19:15:32.364231504 +0000 UTC m=+0.967620897 container died 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True)
Jan 20 19:15:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-7d0287d3f3ce276279a5d2d03bb2bf1203a7630e1d0faa378bc0fc22513f0ffe-merged.mount: Deactivated successfully.
Jan 20 19:15:32 compute-0 podman[181833]: 2026-01-20 19:15:32.409428785 +0000 UTC m=+1.012818178 container remove 856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:15:32 compute-0 systemd[1]: libpod-conmon-856c5cd01a4bfe423f0fc4870091ebd56dec904c9f737229b45b8502491c6260.scope: Deactivated successfully.
Jan 20 19:15:32 compute-0 sudo[181701]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:15:32 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:15:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:15:32 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:15:32 compute-0 sudo[182087]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:15:32 compute-0 sudo[182087]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:15:32 compute-0 sudo[182087]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 19:15:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 19:15:32 compute-0 systemd[1]: Reloading.
Jan 20 19:15:32 compute-0 ceph-mon[75120]: pgmap v503: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:32 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:15:32 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:15:32 compute-0 systemd-rc-local-generator[182176]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:15:32 compute-0 systemd-sysv-generator[182179]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:15:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 19:15:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:34 compute-0 podman[183726]: 2026-01-20 19:15:34.435674205 +0000 UTC m=+0.102912218 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:15:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:15:35 compute-0 ceph-mon[75120]: pgmap v504: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:36 compute-0 ceph-mon[75120]: pgmap v505: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:36 compute-0 sudo[162328]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:36 compute-0 podman[186485]: 2026-01-20 19:15:36.628263639 +0000 UTC m=+0.051917586 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:15:37 compute-0 sudo[187087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bousokgkvceukrjnzqtjkrimylpxsdaj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936536.451822-331-209992473338463/AnsiballZ_systemd.py'
Jan 20 19:15:37 compute-0 sudo[187087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:37 compute-0 python3.9[187113]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 19:15:37 compute-0 systemd[1]: Reloading.
Jan 20 19:15:37 compute-0 systemd-rc-local-generator[187630]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:15:37 compute-0 systemd-sysv-generator[187634]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:15:37 compute-0 sudo[187087]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:38 compute-0 sudo[188482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpebwohcshmcybasckmwwsusggkfqdvp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936537.8724797-331-175341759358249/AnsiballZ_systemd.py'
Jan 20 19:15:38 compute-0 sudo[188482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:38 compute-0 python3.9[188507]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 19:15:38 compute-0 systemd[1]: Reloading.
Jan 20 19:15:38 compute-0 systemd-sysv-generator[188973]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:15:38 compute-0 systemd-rc-local-generator[188970]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:15:38 compute-0 sudo[188482]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:38 compute-0 ceph-mon[75120]: pgmap v506: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:39 compute-0 sudo[189843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zampypirpbikeyezqyzlnalntipnxtth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936538.8991776-331-50886642212807/AnsiballZ_systemd.py'
Jan 20 19:15:39 compute-0 sudo[189843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:39 compute-0 python3.9[189868]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 19:15:39 compute-0 systemd[1]: Reloading.
Jan 20 19:15:39 compute-0 systemd-rc-local-generator[190351]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:15:39 compute-0 systemd-sysv-generator[190356]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:15:39 compute-0 sudo[189843]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:40 compute-0 ceph-mon[75120]: pgmap v507: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:40 compute-0 sudo[191176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pitckftjzffrloorohezvwutvzfwtgbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936539.9368386-331-147429351073315/AnsiballZ_systemd.py'
Jan 20 19:15:40 compute-0 sudo[191176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:40 compute-0 python3.9[191203]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 19:15:40 compute-0 systemd[1]: Reloading.
Jan 20 19:15:40 compute-0 systemd-rc-local-generator[191362]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:15:40 compute-0 systemd-sysv-generator[191366]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:15:40 compute-0 sudo[191176]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:40 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 19:15:40 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 19:15:40 compute-0 systemd[1]: man-db-cache-update.service: Consumed 9.622s CPU time.
Jan 20 19:15:40 compute-0 systemd[1]: run-r612b4a7e4bb1452bbb080a765c554355.service: Deactivated successfully.
Jan 20 19:15:41 compute-0 sudo[191525]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jexnkxduokyrhsgvjzesstcstpfqfcjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936541.054711-360-62233475366751/AnsiballZ_systemd.py'
Jan 20 19:15:41 compute-0 sudo[191525]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:41 compute-0 python3.9[191527]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:41 compute-0 systemd[1]: Reloading.
Jan 20 19:15:41 compute-0 systemd-rc-local-generator[191559]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:15:41 compute-0 systemd-sysv-generator[191563]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:15:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:42 compute-0 sudo[191525]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:42 compute-0 sudo[191716]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anjfzotobgetkdomjwpneivsydrvecty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936542.1807902-360-221841286341611/AnsiballZ_systemd.py'
Jan 20 19:15:42 compute-0 sudo[191716]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:42 compute-0 python3.9[191718]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:42 compute-0 systemd[1]: Reloading.
Jan 20 19:15:42 compute-0 systemd-rc-local-generator[191742]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:15:42 compute-0 systemd-sysv-generator[191746]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:15:43 compute-0 ceph-mon[75120]: pgmap v508: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:43 compute-0 sudo[191716]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:44 compute-0 sudo[191906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vphiqccodgcocssxihqjxjrahrmdabaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936543.7280447-360-263226773661833/AnsiballZ_systemd.py'
Jan 20 19:15:44 compute-0 sudo[191906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:44 compute-0 python3.9[191908]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:44 compute-0 systemd[1]: Reloading.
Jan 20 19:15:44 compute-0 systemd-rc-local-generator[191940]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:15:44 compute-0 systemd-sysv-generator[191943]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:15:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:15:44 compute-0 ceph-mon[75120]: pgmap v509: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:44 compute-0 sudo[191906]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:45 compute-0 sudo[192097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doumzrshifraemlzcrnirtknboioynvl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936544.8356855-360-191223029630905/AnsiballZ_systemd.py'
Jan 20 19:15:45 compute-0 sudo[192097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:45 compute-0 python3.9[192099]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:45 compute-0 sudo[192097]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:45 compute-0 sudo[192252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcqlevjwahvgedtqelztyairnodqlgse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936545.54672-360-6988296370486/AnsiballZ_systemd.py'
Jan 20 19:15:45 compute-0 sudo[192252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:46 compute-0 python3.9[192254]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:46 compute-0 systemd[1]: Reloading.
Jan 20 19:15:46 compute-0 systemd-sysv-generator[192290]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:15:46 compute-0 systemd-rc-local-generator[192286]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:15:46 compute-0 sudo[192252]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:46 compute-0 sudo[192443]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jznmnkeepqerisehnrhrnboxdujfalhy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936546.6072009-396-4174550974881/AnsiballZ_systemd.py'
Jan 20 19:15:46 compute-0 sudo[192443]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:46 compute-0 ceph-mon[75120]: pgmap v510: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:47 compute-0 python3.9[192445]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 20 19:15:47 compute-0 systemd[1]: Reloading.
Jan 20 19:15:47 compute-0 systemd-rc-local-generator[192471]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:15:47 compute-0 systemd-sysv-generator[192479]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:15:47 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 20 19:15:47 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 20 19:15:47 compute-0 sudo[192443]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:48 compute-0 sudo[192636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlhlcvvbavmfzjsfiycvgfvskeviqdaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936547.7425385-404-4249936041728/AnsiballZ_systemd.py'
Jan 20 19:15:48 compute-0 sudo[192636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:48 compute-0 python3.9[192638]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:48 compute-0 sudo[192636]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:48 compute-0 sudo[192791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmzmushnhvpwldfbrdgbclprwrfgvuby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936548.4937468-404-251988236129099/AnsiballZ_systemd.py'
Jan 20 19:15:48 compute-0 sudo[192791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:48 compute-0 ceph-mon[75120]: pgmap v511: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:49 compute-0 python3.9[192793]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:49 compute-0 sudo[192791]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:49 compute-0 sudo[192946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmrujwzzvkbheusmuzjolkuaaeemwsko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936549.2208815-404-73419114980625/AnsiballZ_systemd.py'
Jan 20 19:15:49 compute-0 sudo[192946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:49 compute-0 python3.9[192948]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:49 compute-0 sudo[192946]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:49 compute-0 ceph-mon[75120]: pgmap v512: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:50 compute-0 sudo[193101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzgarytpufyyexiyqcnmygrvzbzthdzn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936549.985252-404-63835245778130/AnsiballZ_systemd.py'
Jan 20 19:15:50 compute-0 sudo[193101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:50 compute-0 python3.9[193103]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:50 compute-0 sudo[193101]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:51 compute-0 sudo[193256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnljufffdyvxljdukzzftagspnwpqaiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936550.7753146-404-48065231215268/AnsiballZ_systemd.py'
Jan 20 19:15:51 compute-0 sudo[193256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:51 compute-0 python3.9[193258]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:51 compute-0 sudo[193256]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:51 compute-0 sudo[193411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlsyizjunqdaontemwvmnqduxkrkgnyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936551.4846756-404-143495621752964/AnsiballZ_systemd.py'
Jan 20 19:15:51 compute-0 sudo[193411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:52 compute-0 python3.9[193413]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:52 compute-0 sudo[193411]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:52 compute-0 sudo[193566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vqmzdoiqedfiuxhzelpevutljojcjayw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936552.2076523-404-250594272478897/AnsiballZ_systemd.py'
Jan 20 19:15:52 compute-0 sudo[193566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:52 compute-0 python3.9[193568]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:52 compute-0 sudo[193566]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:52 compute-0 ceph-mon[75120]: pgmap v513: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:53 compute-0 sudo[193721]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yaaoxsgzijkuwqhenhhexwygwflmpzuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936552.9813154-404-25605614311818/AnsiballZ_systemd.py'
Jan 20 19:15:53 compute-0 sudo[193721]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:53 compute-0 python3.9[193723]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:53 compute-0 sudo[193721]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:53 compute-0 sudo[193876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ktgepqpsmrtdpmpwuvnbaonskhzcwglb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936553.7082796-404-199023986125969/AnsiballZ_systemd.py'
Jan 20 19:15:53 compute-0 sudo[193876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:53 compute-0 ceph-mon[75120]: pgmap v514: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:54 compute-0 python3.9[193878]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:54 compute-0 sudo[193876]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:54 compute-0 sudo[194031]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzsmzjtrkhhmtdknxdlcpjqtdqqctaec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936554.457039-404-274814872613104/AnsiballZ_systemd.py'
Jan 20 19:15:54 compute-0 sudo[194031]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:55 compute-0 python3.9[194033]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:55 compute-0 sudo[194031]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:55 compute-0 sudo[194186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmwdjbkwduxdyoqpzgqgstgmyplcuwzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936555.2135236-404-49482401589897/AnsiballZ_systemd.py'
Jan 20 19:15:55 compute-0 sudo[194186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:55 compute-0 python3.9[194188]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:55 compute-0 sudo[194186]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:56 compute-0 sudo[194341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feavouanzpiwhzzbxsrttdnmxxfbsfau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936555.9735124-404-200767535308933/AnsiballZ_systemd.py'
Jan 20 19:15:56 compute-0 sudo[194341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:56 compute-0 python3.9[194343]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:56 compute-0 sudo[194341]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:56 compute-0 ceph-mon[75120]: pgmap v515: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:57 compute-0 sudo[194496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfmlqemuafohdprqlagnxbxlqoxaidvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936556.743709-404-12956983276823/AnsiballZ_systemd.py'
Jan 20 19:15:57 compute-0 sudo[194496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:57 compute-0 python3.9[194498]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:57 compute-0 ceph-mon[75120]: pgmap v516: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:58 compute-0 sudo[194496]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:58 compute-0 sudo[194651]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsbmccdpsoegrvujsenszjjahoyfmrgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936558.4842656-404-17361377070643/AnsiballZ_systemd.py'
Jan 20 19:15:58 compute-0 sudo[194651]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:15:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:15:59 compute-0 python3.9[194653]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 20 19:15:59 compute-0 sudo[194651]: pam_unix(sudo:session): session closed for user root
Jan 20 19:15:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:15:59 compute-0 sudo[194806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lxyuwcvbxxdlfvtveghklskcwxszhanz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936559.593959-506-162912915081162/AnsiballZ_file.py'
Jan 20 19:15:59 compute-0 sudo[194806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:00 compute-0 python3.9[194808]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:16:00 compute-0 sudo[194806]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:00 compute-0 sudo[194958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmkkeembojdekoyxvigjjalygxroazsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936560.242559-506-240944256622525/AnsiballZ_file.py'
Jan 20 19:16:00 compute-0 sudo[194958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:00 compute-0 python3.9[194960]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:16:00 compute-0 sudo[194958]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:01 compute-0 sudo[195110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdpmyuskpgtmngxidxbhxukxbaulrpru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936560.802311-506-215392163171901/AnsiballZ_file.py'
Jan 20 19:16:01 compute-0 sudo[195110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:01 compute-0 python3.9[195112]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:16:01 compute-0 sudo[195110]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:01 compute-0 ceph-mon[75120]: pgmap v517: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:01 compute-0 sudo[195262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmpbrqggushvmeiltvtosyyhswwikznb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936561.353479-506-4495851935254/AnsiballZ_file.py'
Jan 20 19:16:01 compute-0 sudo[195262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:01 compute-0 python3.9[195264]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:16:01 compute-0 sudo[195262]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:02 compute-0 sudo[195414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjuoyibealiokejdvcfaseqfnnaktaap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936561.9522362-506-158571551514575/AnsiballZ_file.py'
Jan 20 19:16:02 compute-0 sudo[195414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:02 compute-0 python3.9[195416]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:16:02 compute-0 ceph-mon[75120]: pgmap v518: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:02 compute-0 sudo[195414]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:02 compute-0 sudo[195566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nettllajanbiwnukhjzzozybjyflnyjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936562.5299213-506-155273376347892/AnsiballZ_file.py'
Jan 20 19:16:02 compute-0 sudo[195566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:02 compute-0 python3.9[195568]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:16:02 compute-0 sudo[195566]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:03 compute-0 python3.9[195718]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:16:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:04 compute-0 sudo[195868]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pnmwjhjpiqfpmzwilfcejbthxxowgrrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936563.8117375-557-237959807662936/AnsiballZ_stat.py'
Jan 20 19:16:04 compute-0 sudo[195868]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:04 compute-0 python3.9[195870]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:04 compute-0 sudo[195868]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:04 compute-0 ceph-mon[75120]: pgmap v519: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:04 compute-0 sudo[196004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hakmlbfwjaikigyqkaitnxybpdmuqakz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936563.8117375-557-237959807662936/AnsiballZ_copy.py'
Jan 20 19:16:04 compute-0 sudo[196004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:05 compute-0 podman[195967]: 2026-01-20 19:16:05.001476262 +0000 UTC m=+0.109701264 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:16:05 compute-0 python3.9[196012]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936563.8117375-557-237959807662936/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:05 compute-0 sudo[196004]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:16:05.441 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:16:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:16:05.442 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:16:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:16:05.442 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:16:05 compute-0 sudo[196169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wblfjujvbsxdogadvshzgplemrvlgcwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936565.2694316-557-139164561759388/AnsiballZ_stat.py'
Jan 20 19:16:05 compute-0 sudo[196169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:05 compute-0 python3.9[196171]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:05 compute-0 sudo[196169]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:05 compute-0 ceph-mon[75120]: pgmap v520: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:06 compute-0 sudo[196294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqkkxgblzsxwowiducerfuoyvhufbjtr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936565.2694316-557-139164561759388/AnsiballZ_copy.py'
Jan 20 19:16:06 compute-0 sudo[196294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:06 compute-0 python3.9[196296]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936565.2694316-557-139164561759388/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:06 compute-0 sudo[196294]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:06 compute-0 podman[196420]: 2026-01-20 19:16:06.78470076 +0000 UTC m=+0.064533593 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 20 19:16:06 compute-0 sudo[196463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tszbmbybvkwfgdnpyhmvkzpzibyzvyfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936566.4129913-557-204330862314184/AnsiballZ_stat.py'
Jan 20 19:16:06 compute-0 sudo[196463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:06 compute-0 python3.9[196467]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:07 compute-0 sudo[196463]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:07 compute-0 sudo[196590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-duimprrvecfthiwjpgtfzwnejthlbgmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936566.4129913-557-204330862314184/AnsiballZ_copy.py'
Jan 20 19:16:07 compute-0 sudo[196590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:07 compute-0 python3.9[196592]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936566.4129913-557-204330862314184/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:07 compute-0 sudo[196590]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:07 compute-0 sudo[196742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtlpmqpjzwlhoptjlhkvhpzzckkgosff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936567.647269-557-101964725429641/AnsiballZ_stat.py'
Jan 20 19:16:07 compute-0 sudo[196742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:08 compute-0 ceph-mon[75120]: pgmap v521: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:08 compute-0 python3.9[196744]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:08 compute-0 sudo[196742]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:08 compute-0 sudo[196867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmxjsymoufqjtsisrdvnsurulqkceezj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936567.647269-557-101964725429641/AnsiballZ_copy.py'
Jan 20 19:16:08 compute-0 sudo[196867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:08 compute-0 python3.9[196869]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936567.647269-557-101964725429641/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:08 compute-0 sudo[196867]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:09 compute-0 sudo[197019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iofsduljmloyelhguqrufbccmqswncjl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936568.7885964-557-91731422437573/AnsiballZ_stat.py'
Jan 20 19:16:09 compute-0 sudo[197019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:09 compute-0 python3.9[197021]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:09 compute-0 sudo[197019]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:09 compute-0 sudo[197144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvjpxqpzlukqxigbrjniwprxukunqzbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936568.7885964-557-91731422437573/AnsiballZ_copy.py'
Jan 20 19:16:09 compute-0 sudo[197144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:09 compute-0 python3.9[197146]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936568.7885964-557-91731422437573/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:09 compute-0 sudo[197144]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.930285) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569930315, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2038, "num_deletes": 251, "total_data_size": 3606422, "memory_usage": 3657432, "flush_reason": "Manual Compaction"}
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569947699, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3530265, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9740, "largest_seqno": 11777, "table_properties": {"data_size": 3520917, "index_size": 5970, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17725, "raw_average_key_size": 19, "raw_value_size": 3502532, "raw_average_value_size": 3840, "num_data_blocks": 271, "num_entries": 912, "num_filter_entries": 912, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936334, "oldest_key_time": 1768936334, "file_creation_time": 1768936569, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 17483 microseconds, and 6915 cpu microseconds.
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.947766) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3530265 bytes OK
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.947786) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.949229) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.949244) EVENT_LOG_v1 {"time_micros": 1768936569949240, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.949260) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3597929, prev total WAL file size 3597929, number of live WAL files 2.
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.950533) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3447KB)], [26(6112KB)]
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569950600, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9789444, "oldest_snapshot_seqno": -1}
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3723 keys, 8171744 bytes, temperature: kUnknown
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569995759, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 8171744, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8143215, "index_size": 18115, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9349, "raw_key_size": 89368, "raw_average_key_size": 24, "raw_value_size": 8072413, "raw_average_value_size": 2168, "num_data_blocks": 784, "num_entries": 3723, "num_filter_entries": 3723, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936569, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.995996) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 8171744 bytes
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.997937) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 216.5 rd, 180.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.4, 6.0 +0.0 blob) out(7.8 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4237, records dropped: 514 output_compression: NoCompression
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.997960) EVENT_LOG_v1 {"time_micros": 1768936569997949, "job": 10, "event": "compaction_finished", "compaction_time_micros": 45226, "compaction_time_cpu_micros": 17245, "output_level": 6, "num_output_files": 1, "total_output_size": 8171744, "num_input_records": 4237, "num_output_records": 3723, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569998652, "job": 10, "event": "table_file_deletion", "file_number": 28}
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936569999609, "job": 10, "event": "table_file_deletion", "file_number": 26}
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.950423) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.999787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.999794) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.999796) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.999797) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:16:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:16:09.999799) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:16:10 compute-0 sudo[197296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmyenertxmovipfbazrjafsjhddwkekt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936569.9683702-557-25540099043272/AnsiballZ_stat.py'
Jan 20 19:16:10 compute-0 sudo[197296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:10 compute-0 python3.9[197298]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:10 compute-0 sudo[197296]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:10 compute-0 sudo[197421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hostqkzdrnzanvwfeqjkuwudfovvpobq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936569.9683702-557-25540099043272/AnsiballZ_copy.py'
Jan 20 19:16:10 compute-0 sudo[197421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:10 compute-0 python3.9[197423]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936569.9683702-557-25540099043272/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:10 compute-0 ceph-mon[75120]: pgmap v522: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:10 compute-0 sudo[197421]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:11 compute-0 sudo[197573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pdfdjzoykvthaliihiqgeijejvjhrmsi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936571.0617537-557-79250257004656/AnsiballZ_stat.py'
Jan 20 19:16:11 compute-0 sudo[197573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:11 compute-0 python3.9[197575]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:11 compute-0 sudo[197573]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:11 compute-0 sudo[197696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-awdujudsiwxmphkuafvznzyubrurhruc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936571.0617537-557-79250257004656/AnsiballZ_copy.py'
Jan 20 19:16:11 compute-0 sudo[197696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:11 compute-0 python3.9[197698]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936571.0617537-557-79250257004656/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:12 compute-0 sudo[197696]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:12 compute-0 ceph-mon[75120]: pgmap v523: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:12 compute-0 sudo[197848]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-asrjdficmwazjtvqbwegjxvyrkkjtuul ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936572.135374-557-75956552410966/AnsiballZ_stat.py'
Jan 20 19:16:12 compute-0 sudo[197848]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:12 compute-0 python3.9[197850]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:12 compute-0 sudo[197848]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:12 compute-0 sudo[197973]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irnksianvqmoeoyvpejyrzscvjhezxqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936572.135374-557-75956552410966/AnsiballZ_copy.py'
Jan 20 19:16:12 compute-0 sudo[197973]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:13 compute-0 python3.9[197975]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1768936572.135374-557-75956552410966/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:13 compute-0 sudo[197973]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:13 compute-0 sudo[198125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gltkkqwrhalbkqcdzxoclrhyitjqeule ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936573.513545-670-251151423137723/AnsiballZ_command.py'
Jan 20 19:16:13 compute-0 sudo[198125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:13 compute-0 python3.9[198127]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 20 19:16:13 compute-0 sudo[198125]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:14 compute-0 sudo[198278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjoagihcmjuxfvtfgdlacechqvczvjyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936574.1417916-679-18243568457817/AnsiballZ_file.py'
Jan 20 19:16:14 compute-0 sudo[198278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:14 compute-0 python3.9[198280]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:14 compute-0 sudo[198278]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:14 compute-0 sudo[198430]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqfzbvizdplfapayfwgewgspbqxeaotn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936574.7425754-679-170156733567134/AnsiballZ_file.py'
Jan 20 19:16:14 compute-0 sudo[198430]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:15 compute-0 ceph-mon[75120]: pgmap v524: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:15 compute-0 python3.9[198432]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:15 compute-0 sudo[198430]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:15 compute-0 sudo[198582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knnpylxrcokmurwalmfsxrkzizqqqveu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936575.334409-679-89523731470359/AnsiballZ_file.py'
Jan 20 19:16:15 compute-0 sudo[198582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:15 compute-0 python3.9[198584]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:15 compute-0 sudo[198582]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:16 compute-0 ceph-mon[75120]: pgmap v525: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:16 compute-0 sudo[198734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvlhnjmchhruvwbgwrsiqaeyeopyqbvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936575.974338-679-41556731040487/AnsiballZ_file.py'
Jan 20 19:16:16 compute-0 sudo[198734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:16 compute-0 python3.9[198736]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:16 compute-0 sudo[198734]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:16 compute-0 sudo[198886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cfsksnsirpvacrquxtbeltjqagkudods ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936576.5529923-679-109822812520552/AnsiballZ_file.py'
Jan 20 19:16:16 compute-0 sudo[198886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:16 compute-0 python3.9[198888]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:16 compute-0 sudo[198886]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:17 compute-0 sudo[199038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urxcvaruwjbrjzmrlimueezcwzfflstv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936577.1237485-679-264336957388402/AnsiballZ_file.py'
Jan 20 19:16:17 compute-0 sudo[199038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:17 compute-0 python3.9[199040]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:17 compute-0 sudo[199038]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:18 compute-0 sudo[199190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndcxharyfeumwibxgtnqcbgwukhiekoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936577.773275-679-188907290814346/AnsiballZ_file.py'
Jan 20 19:16:18 compute-0 sudo[199190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:18 compute-0 python3.9[199192]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:18 compute-0 sudo[199190]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:18 compute-0 sudo[199342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-btpqnkntykepdpibygbkbjvksrplgqso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936578.3889005-679-183094304548983/AnsiballZ_file.py'
Jan 20 19:16:18 compute-0 sudo[199342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:18 compute-0 python3.9[199344]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:18 compute-0 sudo[199342]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:18 compute-0 ceph-mon[75120]: pgmap v526: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:19 compute-0 sudo[199494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isjiktswomwxremxfutwuzsnpehkozpm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936578.9854243-679-233009099552384/AnsiballZ_file.py'
Jan 20 19:16:19 compute-0 sudo[199494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:19 compute-0 python3.9[199496]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:19 compute-0 sudo[199494]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:19 compute-0 sudo[199646]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-huprnzzyyyppnjbjphshulqsgpnlobgu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936579.6241195-679-66608181818701/AnsiballZ_file.py'
Jan 20 19:16:19 compute-0 sudo[199646]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:19 compute-0 ceph-mon[75120]: pgmap v527: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:20 compute-0 python3.9[199648]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:20 compute-0 sudo[199646]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:20 compute-0 sudo[199798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hutuqasxbuedbzhurgxlzlpxdcthfeyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936580.168861-679-164399506928542/AnsiballZ_file.py'
Jan 20 19:16:20 compute-0 sudo[199798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:20 compute-0 python3.9[199800]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:20 compute-0 sudo[199798]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:20 compute-0 sudo[199950]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urmihstlolcqtkjdcgqrmpguyfduitqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936580.7307136-679-46528681409237/AnsiballZ_file.py'
Jan 20 19:16:20 compute-0 sudo[199950]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:21 compute-0 python3.9[199952]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:21 compute-0 sudo[199950]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:21 compute-0 sudo[200102]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxszymmcyzpboolvamnleaozjiwviyiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936581.2732682-679-125228194733431/AnsiballZ_file.py'
Jan 20 19:16:21 compute-0 sudo[200102]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:21 compute-0 python3.9[200104]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:21 compute-0 sudo[200102]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:22 compute-0 sudo[200254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zyinocwfxsebzetrtgubqxmvapxbjfig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936581.843183-679-24772103633090/AnsiballZ_file.py'
Jan 20 19:16:22 compute-0 sudo[200254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:22 compute-0 python3.9[200256]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:22 compute-0 sudo[200254]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:22 compute-0 ceph-mon[75120]: pgmap v528: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:23 compute-0 sudo[200406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sznobjqqxetkirkawqgjhcdstutkghmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936582.8093393-778-120381698394021/AnsiballZ_stat.py'
Jan 20 19:16:23 compute-0 sudo[200406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:23 compute-0 python3.9[200408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:23 compute-0 sudo[200406]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:23 compute-0 sudo[200529]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afvgiwiqzfccpaloopeukcotprcqslzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936582.8093393-778-120381698394021/AnsiballZ_copy.py'
Jan 20 19:16:23 compute-0 sudo[200529]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:23 compute-0 python3.9[200531]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936582.8093393-778-120381698394021/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:23 compute-0 sudo[200529]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:23 compute-0 ceph-mon[75120]: pgmap v529: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:24 compute-0 sudo[200681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icqbuiqsnuordyadttvnbnfxmblqogqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936583.9970818-778-269894785348466/AnsiballZ_stat.py'
Jan 20 19:16:24 compute-0 sudo[200681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:24 compute-0 python3.9[200683]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:24 compute-0 sudo[200681]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:24 compute-0 sudo[200804]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yujawdyqmvjrbcxhgamsbvwhzwnrsjfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936583.9970818-778-269894785348466/AnsiballZ_copy.py'
Jan 20 19:16:24 compute-0 sudo[200804]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:24 compute-0 python3.9[200806]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936583.9970818-778-269894785348466/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:24 compute-0 sudo[200804]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:25 compute-0 sudo[200956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzowagichxjhzgbytfnbafxfdsojucgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936585.107667-778-112668106749677/AnsiballZ_stat.py'
Jan 20 19:16:25 compute-0 sudo[200956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:25 compute-0 python3.9[200958]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:25 compute-0 sudo[200956]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:25 compute-0 sudo[201079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdxxszjesjlpqsvofxmccliwumkhmttg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936585.107667-778-112668106749677/AnsiballZ_copy.py'
Jan 20 19:16:25 compute-0 sudo[201079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:26 compute-0 python3.9[201081]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936585.107667-778-112668106749677/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:26 compute-0 sudo[201079]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:26 compute-0 sudo[201231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzmeuhbpglorjowfhmxuclakbjzhjhyp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936586.315435-778-64109133258443/AnsiballZ_stat.py'
Jan 20 19:16:26 compute-0 sudo[201231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:26 compute-0 python3.9[201233]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:26 compute-0 sudo[201231]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:26 compute-0 ceph-mon[75120]: pgmap v530: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:27 compute-0 sudo[201354]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgfwbshqjewfkoqwnflyrqojzjzykqpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936586.315435-778-64109133258443/AnsiballZ_copy.py'
Jan 20 19:16:27 compute-0 sudo[201354]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:27 compute-0 python3.9[201356]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936586.315435-778-64109133258443/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:27 compute-0 sudo[201354]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:27 compute-0 sudo[201506]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zhnyxlqigjkgdneafytbhlxgvmyldcsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936587.6177828-778-91431063390145/AnsiballZ_stat.py'
Jan 20 19:16:27 compute-0 sudo[201506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:28 compute-0 ceph-mon[75120]: pgmap v531: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:28 compute-0 python3.9[201508]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:28 compute-0 sudo[201506]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:28 compute-0 sudo[201629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeexhurydflgntdwddblpxzsvczegffg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936587.6177828-778-91431063390145/AnsiballZ_copy.py'
Jan 20 19:16:28 compute-0 sudo[201629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:28 compute-0 python3.9[201631]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936587.6177828-778-91431063390145/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:28 compute-0 sudo[201629]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:29 compute-0 sudo[201781]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jicevaumeewrzztfmppceoatfocczjhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936588.9957635-778-137321479195980/AnsiballZ_stat.py'
Jan 20 19:16:29 compute-0 sudo[201781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:29 compute-0 python3.9[201783]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:29 compute-0 sudo[201781]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:29 compute-0 sudo[201904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ghbccpnwermsqvtofoohzzgdsqvxwjte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936588.9957635-778-137321479195980/AnsiballZ_copy.py'
Jan 20 19:16:29 compute-0 sudo[201904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:29 compute-0 python3.9[201906]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936588.9957635-778-137321479195980/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:29 compute-0 sudo[201904]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:30 compute-0 sudo[202056]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yneejbdrqugsniglpytznvvlhawznrug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936590.0700088-778-101568891489931/AnsiballZ_stat.py'
Jan 20 19:16:30 compute-0 sudo[202056]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:30 compute-0 python3.9[202058]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:30 compute-0 sudo[202056]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:30 compute-0 sudo[202181]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efxzmzmnsiyvezirdlrzdvhtuxbtszpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936590.0700088-778-101568891489931/AnsiballZ_copy.py'
Jan 20 19:16:30 compute-0 sudo[202181]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:30 compute-0 ceph-mon[75120]: pgmap v532: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:31 compute-0 sshd-session[202059]: Invalid user solv from 45.148.10.240 port 34522
Jan 20 19:16:31 compute-0 python3.9[202183]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936590.0700088-778-101568891489931/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:31 compute-0 sudo[202181]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:31 compute-0 sshd-session[202059]: Connection closed by invalid user solv 45.148.10.240 port 34522 [preauth]
Jan 20 19:16:31 compute-0 sudo[202333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vzponllqsnzxecoamxhnuuggprcrfibx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936591.2716708-778-27383185228753/AnsiballZ_stat.py'
Jan 20 19:16:31 compute-0 sudo[202333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:16:31
Jan 20 19:16:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:16:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:16:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', '.mgr', '.rgw.root', 'images', 'vms', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Jan 20 19:16:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:16:31 compute-0 python3.9[202335]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:31 compute-0 sudo[202333]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:31 compute-0 ceph-mon[75120]: pgmap v533: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:31 compute-0 sudo[202456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcettajhazinaeddyygtkwpulsfuobwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936591.2716708-778-27383185228753/AnsiballZ_copy.py'
Jan 20 19:16:31 compute-0 sudo[202456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:32 compute-0 python3.9[202458]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936591.2716708-778-27383185228753/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:32 compute-0 sudo[202456]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:32 compute-0 sudo[202614]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atlxojcnkzgahksskiguleyengpnnreo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936592.3218641-778-241745188474360/AnsiballZ_stat.py'
Jan 20 19:16:32 compute-0 sudo[202614]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:32 compute-0 sudo[202604]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:16:32 compute-0 sudo[202604]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:32 compute-0 sudo[202604]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:32 compute-0 sudo[202636]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:16:32 compute-0 sudo[202636]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:32 compute-0 python3.9[202632]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:32 compute-0 sudo[202614]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:33 compute-0 sudo[202636]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:16:33 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:16:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:16:33 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:16:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:16:33 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:16:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:16:33 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:16:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:16:33 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:16:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:16:33 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:16:33 compute-0 sudo[202819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyciyglelnfbewxutozfcbtatbkuarav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936592.3218641-778-241745188474360/AnsiballZ_copy.py'
Jan 20 19:16:33 compute-0 sudo[202819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:33 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:16:33 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:16:33 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:16:33 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:16:33 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:16:33 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:16:33 compute-0 sudo[202806]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:16:33 compute-0 sudo[202806]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:33 compute-0 sudo[202806]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:33 compute-0 sudo[202840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:16:33 compute-0 sudo[202840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:33 compute-0 python3.9[202837]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936592.3218641-778-241745188474360/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:33 compute-0 sudo[202819]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:33 compute-0 podman[202901]: 2026-01-20 19:16:33.570995564 +0000 UTC m=+0.042348364 container create 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:16:33 compute-0 systemd[1]: Started libpod-conmon-420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2.scope.
Jan 20 19:16:33 compute-0 podman[202901]: 2026-01-20 19:16:33.55072134 +0000 UTC m=+0.022074160 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:16:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:33 compute-0 podman[202901]: 2026-01-20 19:16:33.688120939 +0000 UTC m=+0.159473759 container init 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:16:33 compute-0 podman[202901]: 2026-01-20 19:16:33.696091094 +0000 UTC m=+0.167443894 container start 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030)
Jan 20 19:16:33 compute-0 podman[202901]: 2026-01-20 19:16:33.699743983 +0000 UTC m=+0.171096783 container attach 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:16:33 compute-0 mystifying_meninsky[202963]: 167 167
Jan 20 19:16:33 compute-0 systemd[1]: libpod-420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2.scope: Deactivated successfully.
Jan 20 19:16:33 compute-0 podman[202901]: 2026-01-20 19:16:33.702600583 +0000 UTC m=+0.173953383 container died 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Jan 20 19:16:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-f7f2826f13a3909b2daa5671f409423d36ad726476113c67281ba41c132b8188-merged.mount: Deactivated successfully.
Jan 20 19:16:33 compute-0 podman[202901]: 2026-01-20 19:16:33.749615788 +0000 UTC m=+0.220968588 container remove 420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=mystifying_meninsky, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:16:33 compute-0 systemd[1]: libpod-conmon-420242c0b3708070d5eb41162d3f0e46339756f2bdf941dc2574a7fc35d1a4d2.scope: Deactivated successfully.
Jan 20 19:16:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:33 compute-0 sudo[203061]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pixxtjbxyfqamwraysrsodwhwvupjlhm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936593.5707355-778-279462881777694/AnsiballZ_stat.py'
Jan 20 19:16:33 compute-0 sudo[203061]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:33 compute-0 podman[203069]: 2026-01-20 19:16:33.906053792 +0000 UTC m=+0.036804088 container create d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 19:16:33 compute-0 systemd[1]: Started libpod-conmon-d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0.scope.
Jan 20 19:16:33 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:33 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:33 compute-0 podman[203069]: 2026-01-20 19:16:33.891177189 +0000 UTC m=+0.021927505 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:16:33 compute-0 podman[203069]: 2026-01-20 19:16:33.987850566 +0000 UTC m=+0.118600892 container init d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 19:16:33 compute-0 podman[203069]: 2026-01-20 19:16:33.995709928 +0000 UTC m=+0.126460224 container start d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:16:34 compute-0 podman[203069]: 2026-01-20 19:16:34.005587589 +0000 UTC m=+0.136337915 container attach d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:16:34 compute-0 python3.9[203063]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:34 compute-0 sudo[203061]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:34 compute-0 ceph-mon[75120]: pgmap v534: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:34 compute-0 sudo[203221]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eprqryjtdsiduptzonjrpfywaynncntt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936593.5707355-778-279462881777694/AnsiballZ_copy.py'
Jan 20 19:16:34 compute-0 sudo[203221]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:34 compute-0 flamboyant_jackson[203086]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:16:34 compute-0 flamboyant_jackson[203086]: --> All data devices are unavailable
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:16:34 compute-0 systemd[1]: libpod-d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0.scope: Deactivated successfully.
Jan 20 19:16:34 compute-0 podman[203069]: 2026-01-20 19:16:34.493294788 +0000 UTC m=+0.624045084 container died d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:16:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-d7483a97fa2ac6f9a8b78dd8d6c58fcccfc1a3b949f1b2400831683af9a01ab3-merged.mount: Deactivated successfully.
Jan 20 19:16:34 compute-0 podman[203069]: 2026-01-20 19:16:34.540170642 +0000 UTC m=+0.670920938 container remove d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_jackson, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:16:34 compute-0 systemd[1]: libpod-conmon-d0af22e574f644737e18db5774503ef14f3037cf7cf2febde5ea548ca52b93b0.scope: Deactivated successfully.
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:16:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:16:34 compute-0 python3.9[203225]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936593.5707355-778-279462881777694/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:34 compute-0 sudo[202840]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:34 compute-0 sudo[203221]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:34 compute-0 sudo[203243]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:16:34 compute-0 sudo[203243]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:34 compute-0 sudo[203243]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:34 compute-0 sudo[203285]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:16:34 compute-0 sudo[203285]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:34 compute-0 sudo[203467]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rksvuvbsuyuuyhhwagodtwwdiyattxru ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936594.7216325-778-41189754982359/AnsiballZ_stat.py'
Jan 20 19:16:34 compute-0 sudo[203467]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:35 compute-0 podman[203431]: 2026-01-20 19:16:34.948702811 +0000 UTC m=+0.019955648 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:16:35 compute-0 python3.9[203471]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:35 compute-0 sudo[203467]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:35 compute-0 auditd[702]: Audit daemon rotating log files
Jan 20 19:16:35 compute-0 podman[203431]: 2026-01-20 19:16:35.359407733 +0000 UTC m=+0.430660550 container create e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030)
Jan 20 19:16:35 compute-0 systemd[1]: Started libpod-conmon-e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3.scope.
Jan 20 19:16:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:35 compute-0 podman[203508]: 2026-01-20 19:16:35.436330159 +0000 UTC m=+0.110451424 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 19:16:35 compute-0 podman[203431]: 2026-01-20 19:16:35.458762916 +0000 UTC m=+0.530015763 container init e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:16:35 compute-0 podman[203431]: 2026-01-20 19:16:35.468151545 +0000 UTC m=+0.539404352 container start e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 19:16:35 compute-0 podman[203431]: 2026-01-20 19:16:35.471589908 +0000 UTC m=+0.542842755 container attach e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 19:16:35 compute-0 great_kepler[203563]: 167 167
Jan 20 19:16:35 compute-0 systemd[1]: libpod-e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3.scope: Deactivated successfully.
Jan 20 19:16:35 compute-0 conmon[203563]: conmon e155e937769e3514460f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3.scope/container/memory.events
Jan 20 19:16:35 compute-0 podman[203431]: 2026-01-20 19:16:35.478005395 +0000 UTC m=+0.549258202 container died e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:16:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-2e05e4dc795773f66da6fe1a9d029bad5de50a174672c47b7f63992547eac89d-merged.mount: Deactivated successfully.
Jan 20 19:16:35 compute-0 podman[203431]: 2026-01-20 19:16:35.518114392 +0000 UTC m=+0.589367209 container remove e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=great_kepler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:16:35 compute-0 systemd[1]: libpod-conmon-e155e937769e3514460fe1ef06e9f532c52d1b865597e39ba1ab2c0757f76df3.scope: Deactivated successfully.
Jan 20 19:16:35 compute-0 sudo[203638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzlvwdghhpuvwaikpvseefaexajmgkla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936594.7216325-778-41189754982359/AnsiballZ_copy.py'
Jan 20 19:16:35 compute-0 sudo[203638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:35 compute-0 podman[203646]: 2026-01-20 19:16:35.674564797 +0000 UTC m=+0.041998755 container create 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:16:35 compute-0 systemd[1]: Started libpod-conmon-31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0.scope.
Jan 20 19:16:35 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0a801b247881762510e2a91a6f2a2d09139fefe8ec318b28edf15ae157fd6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0a801b247881762510e2a91a6f2a2d09139fefe8ec318b28edf15ae157fd6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0a801b247881762510e2a91a6f2a2d09139fefe8ec318b28edf15ae157fd6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8d0a801b247881762510e2a91a6f2a2d09139fefe8ec318b28edf15ae157fd6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:35 compute-0 python3.9[203640]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936594.7216325-778-41189754982359/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:35 compute-0 podman[203646]: 2026-01-20 19:16:35.655937183 +0000 UTC m=+0.023371161 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:16:35 compute-0 podman[203646]: 2026-01-20 19:16:35.760324677 +0000 UTC m=+0.127758655 container init 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 19:16:35 compute-0 podman[203646]: 2026-01-20 19:16:35.76741808 +0000 UTC m=+0.134852038 container start 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:16:35 compute-0 podman[203646]: 2026-01-20 19:16:35.770442163 +0000 UTC m=+0.137876141 container attach 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:16:35 compute-0 sudo[203638]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]: {
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:     "0": [
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:         {
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "devices": [
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "/dev/loop3"
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             ],
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_name": "ceph_lv0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_size": "21470642176",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "name": "ceph_lv0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "tags": {
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.cluster_name": "ceph",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.crush_device_class": "",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.encrypted": "0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.objectstore": "bluestore",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.osd_id": "0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.type": "block",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.vdo": "0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.with_tpm": "0"
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             },
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "type": "block",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "vg_name": "ceph_vg0"
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:         }
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:     ],
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:     "1": [
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:         {
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "devices": [
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "/dev/loop4"
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             ],
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_name": "ceph_lv1",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_size": "21470642176",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "name": "ceph_lv1",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "tags": {
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.cluster_name": "ceph",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.crush_device_class": "",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.encrypted": "0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.objectstore": "bluestore",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.osd_id": "1",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.type": "block",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.vdo": "0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.with_tpm": "0"
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             },
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "type": "block",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "vg_name": "ceph_vg1"
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:         }
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:     ],
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:     "2": [
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:         {
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "devices": [
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "/dev/loop5"
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             ],
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_name": "ceph_lv2",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_size": "21470642176",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "name": "ceph_lv2",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "tags": {
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.cluster_name": "ceph",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.crush_device_class": "",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.encrypted": "0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.objectstore": "bluestore",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.osd_id": "2",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.type": "block",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.vdo": "0",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:                 "ceph.with_tpm": "0"
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             },
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "type": "block",
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:             "vg_name": "ceph_vg2"
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:         }
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]:     ]
Jan 20 19:16:36 compute-0 kind_dubinsky[203662]: }
Jan 20 19:16:36 compute-0 systemd[1]: libpod-31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0.scope: Deactivated successfully.
Jan 20 19:16:36 compute-0 podman[203646]: 2026-01-20 19:16:36.094891093 +0000 UTC m=+0.462325051 container died 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:16:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-8d0a801b247881762510e2a91a6f2a2d09139fefe8ec318b28edf15ae157fd6a-merged.mount: Deactivated successfully.
Jan 20 19:16:36 compute-0 podman[203646]: 2026-01-20 19:16:36.136227372 +0000 UTC m=+0.503661330 container remove 31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:16:36 compute-0 systemd[1]: libpod-conmon-31172e4bff226cefd8145c6860daeeb621a2ce9758eda4baa39f87c8e5964fd0.scope: Deactivated successfully.
Jan 20 19:16:36 compute-0 sudo[203832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfgytwbshxkcetclqbjxlochhmhffvff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936595.9211617-778-109271389805803/AnsiballZ_stat.py'
Jan 20 19:16:36 compute-0 sudo[203285]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:36 compute-0 sudo[203832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:36 compute-0 sudo[203835]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:16:36 compute-0 sudo[203835]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:36 compute-0 sudo[203835]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:36 compute-0 sudo[203860]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:16:36 compute-0 sudo[203860]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:36 compute-0 python3.9[203834]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:36 compute-0 sudo[203832]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:36 compute-0 podman[203943]: 2026-01-20 19:16:36.543693605 +0000 UTC m=+0.039920595 container create 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:16:36 compute-0 systemd[1]: Started libpod-conmon-112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e.scope.
Jan 20 19:16:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:36 compute-0 podman[203943]: 2026-01-20 19:16:36.525733387 +0000 UTC m=+0.021960387 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:16:36 compute-0 podman[203943]: 2026-01-20 19:16:36.644596265 +0000 UTC m=+0.140823255 container init 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:16:36 compute-0 podman[203943]: 2026-01-20 19:16:36.652328473 +0000 UTC m=+0.148555463 container start 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:16:36 compute-0 determined_sutherland[203982]: 167 167
Jan 20 19:16:36 compute-0 podman[203943]: 2026-01-20 19:16:36.657046309 +0000 UTC m=+0.153273299 container attach 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:16:36 compute-0 systemd[1]: libpod-112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e.scope: Deactivated successfully.
Jan 20 19:16:36 compute-0 podman[203943]: 2026-01-20 19:16:36.657629432 +0000 UTC m=+0.153856422 container died 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:16:36 compute-0 systemd[1]: var-lib-containers-storage-overlay-545714ba0b27f4d0b001166e9a12a83f466210970902577e607b109b74fb749e-merged.mount: Deactivated successfully.
Jan 20 19:16:36 compute-0 sudo[204044]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxckowraffhwztlhbkjoncytalathliw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936595.9211617-778-109271389805803/AnsiballZ_copy.py'
Jan 20 19:16:36 compute-0 sudo[204044]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:36 compute-0 podman[203943]: 2026-01-20 19:16:36.711151777 +0000 UTC m=+0.207378767 container remove 112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=determined_sutherland, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:16:36 compute-0 systemd[1]: libpod-conmon-112cfe97d0a3582d08cfa7258e998ff2fec081cd1f33e682e28745eb0ff6698e.scope: Deactivated successfully.
Jan 20 19:16:36 compute-0 podman[204058]: 2026-01-20 19:16:36.880339162 +0000 UTC m=+0.055558665 container create 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:16:36 compute-0 python3.9[204050]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936595.9211617-778-109271389805803/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:36 compute-0 sudo[204044]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:36 compute-0 systemd[1]: Started libpod-conmon-5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e.scope.
Jan 20 19:16:36 compute-0 podman[204058]: 2026-01-20 19:16:36.849545021 +0000 UTC m=+0.024764554 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:16:36 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99863ad9b87909a6a2f29adc33e25a71e6f1c18e1d6c7c3ddf4734b03249903/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99863ad9b87909a6a2f29adc33e25a71e6f1c18e1d6c7c3ddf4734b03249903/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99863ad9b87909a6a2f29adc33e25a71e6f1c18e1d6c7c3ddf4734b03249903/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e99863ad9b87909a6a2f29adc33e25a71e6f1c18e1d6c7c3ddf4734b03249903/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:16:36 compute-0 ceph-mon[75120]: pgmap v535: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:36 compute-0 podman[204058]: 2026-01-20 19:16:36.973010391 +0000 UTC m=+0.148229914 container init 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3)
Jan 20 19:16:36 compute-0 podman[204058]: 2026-01-20 19:16:36.980542075 +0000 UTC m=+0.155761578 container start 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 19:16:36 compute-0 podman[204058]: 2026-01-20 19:16:36.983979198 +0000 UTC m=+0.159198731 container attach 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:16:36 compute-0 podman[204072]: 2026-01-20 19:16:36.996915863 +0000 UTC m=+0.082411839 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 19:16:37 compute-0 sudo[204257]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqdwykerdyzlffvbyefltpibccwgprao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936597.0309184-778-143343513104843/AnsiballZ_stat.py'
Jan 20 19:16:37 compute-0 sudo[204257]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:37 compute-0 python3.9[204261]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:37 compute-0 sudo[204257]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:37 compute-0 lvm[204390]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:16:37 compute-0 lvm[204391]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:16:37 compute-0 lvm[204390]: VG ceph_vg0 finished
Jan 20 19:16:37 compute-0 lvm[204391]: VG ceph_vg1 finished
Jan 20 19:16:37 compute-0 lvm[204396]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:16:37 compute-0 lvm[204396]: VG ceph_vg2 finished
Jan 20 19:16:37 compute-0 tender_lovelace[204086]: {}
Jan 20 19:16:37 compute-0 sudo[204449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqsiguwxetrkbnjkbusiudridrcypazv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936597.0309184-778-143343513104843/AnsiballZ_copy.py'
Jan 20 19:16:37 compute-0 sudo[204449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:37 compute-0 systemd[1]: libpod-5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e.scope: Deactivated successfully.
Jan 20 19:16:37 compute-0 systemd[1]: libpod-5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e.scope: Consumed 1.303s CPU time.
Jan 20 19:16:37 compute-0 podman[204058]: 2026-01-20 19:16:37.821572868 +0000 UTC m=+0.996792391 container died 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:16:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:37 compute-0 python3.9[204451]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936597.0309184-778-143343513104843/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:37 compute-0 sudo[204449]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:38 compute-0 systemd[1]: var-lib-containers-storage-overlay-e99863ad9b87909a6a2f29adc33e25a71e6f1c18e1d6c7c3ddf4734b03249903-merged.mount: Deactivated successfully.
Jan 20 19:16:38 compute-0 ceph-mon[75120]: pgmap v536: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:38 compute-0 podman[204058]: 2026-01-20 19:16:38.051119434 +0000 UTC m=+1.226338937 container remove 5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=tender_lovelace, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:16:38 compute-0 systemd[1]: libpod-conmon-5b9683de9839b09e56af7715681f50af0504d9ee988c7cdeccd980707838767e.scope: Deactivated successfully.
Jan 20 19:16:38 compute-0 sudo[203860]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:16:38 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:16:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:16:38 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:16:38 compute-0 sudo[204511]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:16:38 compute-0 sudo[204511]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:16:38 compute-0 sudo[204511]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:38 compute-0 sudo[204638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiavymmnjahylgeovknogkpctjgrahej ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936598.1152465-778-276303699404132/AnsiballZ_stat.py'
Jan 20 19:16:38 compute-0 sudo[204638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:38 compute-0 python3.9[204640]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:38 compute-0 sudo[204638]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:38 compute-0 sudo[204761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzeagmibsuhiejymouhmumvpcoaocrlp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936598.1152465-778-276303699404132/AnsiballZ_copy.py'
Jan 20 19:16:38 compute-0 sudo[204761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:39 compute-0 python3.9[204763]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936598.1152465-778-276303699404132/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:39 compute-0 sudo[204761]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:16:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:16:39 compute-0 python3.9[204913]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:16:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:40 compute-0 ceph-mon[75120]: pgmap v537: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:40 compute-0 sudo[205066]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojejajpkmsrorqwtgzsdfbxwpnlexncz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936599.8411186-984-121099695771467/AnsiballZ_seboolean.py'
Jan 20 19:16:40 compute-0 sudo[205066]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:40 compute-0 python3.9[205068]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 20 19:16:41 compute-0 sudo[205066]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:41 compute-0 sudo[205222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fskibavcjvozqbckcveqnikpxdkbkqmh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936601.7321615-992-275505205055046/AnsiballZ_copy.py'
Jan 20 19:16:41 compute-0 dbus-broker-launch[773]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 20 19:16:41 compute-0 sudo[205222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:42 compute-0 python3.9[205224]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:42 compute-0 sudo[205222]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:42 compute-0 sudo[205374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbmiflzoeksgssjqagocrvugvdvxbmfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936602.3344593-992-72284496766684/AnsiballZ_copy.py'
Jan 20 19:16:42 compute-0 sudo[205374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:42 compute-0 python3.9[205376]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:42 compute-0 sudo[205374]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:42 compute-0 ceph-mon[75120]: pgmap v538: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:43 compute-0 sudo[205526]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vkllhqqiuucrozggzgnbdnmwqqyuakjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936602.9116895-992-125917295560111/AnsiballZ_copy.py'
Jan 20 19:16:43 compute-0 sudo[205526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:43 compute-0 python3.9[205528]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:43 compute-0 sudo[205526]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:43 compute-0 sudo[205678]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pebfrllkvobuyifyjlabbopaeuxffrjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936603.552856-992-30265664795832/AnsiballZ_copy.py'
Jan 20 19:16:43 compute-0 sudo[205678]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:43 compute-0 ceph-mon[75120]: pgmap v539: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:44 compute-0 python3.9[205680]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:44 compute-0 sudo[205678]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:44 compute-0 sudo[205830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmoecovemxmobwycwnujevsptbbceccd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936604.19519-992-164315589268858/AnsiballZ_copy.py'
Jan 20 19:16:44 compute-0 sudo[205830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:16:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:16:44 compute-0 python3.9[205832]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:44 compute-0 sudo[205830]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:45 compute-0 sudo[205982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kugwqxkcmtvcxwqybuimfrvkskgcktxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936604.8521357-1028-251700956454463/AnsiballZ_copy.py'
Jan 20 19:16:45 compute-0 sudo[205982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:45 compute-0 python3.9[205984]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:45 compute-0 sudo[205982]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:45 compute-0 sudo[206134]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlwlzibbhqnvavqalnhnpxqvodlelsix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936605.5062873-1028-226122502716575/AnsiballZ_copy.py'
Jan 20 19:16:45 compute-0 sudo[206134]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:45 compute-0 python3.9[206136]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:45 compute-0 sudo[206134]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:45 compute-0 ceph-mon[75120]: pgmap v540: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:46 compute-0 sudo[206286]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcdyvqdcpjmnknpfngqqunsvfypkucpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936606.0977378-1028-195370329381343/AnsiballZ_copy.py'
Jan 20 19:16:46 compute-0 sudo[206286]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:46 compute-0 python3.9[206288]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:46 compute-0 sudo[206286]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:46 compute-0 sudo[206438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pskymxalcgazmaieglqwfetvrxrujblk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936606.7280037-1028-40513221930422/AnsiballZ_copy.py'
Jan 20 19:16:47 compute-0 sudo[206438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:47 compute-0 python3.9[206440]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:47 compute-0 sudo[206438]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:47 compute-0 sudo[206590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvwsquydbhusxdtzjowxrdkwmvrkubkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936607.376519-1028-65292503911495/AnsiballZ_copy.py'
Jan 20 19:16:47 compute-0 sudo[206590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:47 compute-0 python3.9[206592]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:47 compute-0 sudo[206590]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:48 compute-0 sudo[206742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-movvnrmdvftlhsrmatxmohumhzblclwk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936608.038441-1064-69260961305930/AnsiballZ_systemd.py'
Jan 20 19:16:48 compute-0 sudo[206742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:48 compute-0 python3.9[206744]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:16:48 compute-0 systemd[1]: Reloading.
Jan 20 19:16:48 compute-0 systemd-rc-local-generator[206770]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:16:48 compute-0 systemd-sysv-generator[206773]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:16:48 compute-0 ceph-mon[75120]: pgmap v541: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:49 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 20 19:16:49 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 20 19:16:49 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 20 19:16:49 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 20 19:16:49 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 20 19:16:49 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 20 19:16:49 compute-0 sudo[206742]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:49 compute-0 sudo[206935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhvtwdyvsfbkqoqrveryzthkopohjrpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936609.4664612-1064-174474481260069/AnsiballZ_systemd.py'
Jan 20 19:16:49 compute-0 sudo[206935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:49 compute-0 ceph-mon[75120]: pgmap v542: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:50 compute-0 python3.9[206937]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:16:50 compute-0 systemd[1]: Reloading.
Jan 20 19:16:50 compute-0 systemd-sysv-generator[206967]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:16:50 compute-0 systemd-rc-local-generator[206964]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:16:50 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 20 19:16:50 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 20 19:16:50 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 20 19:16:50 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 20 19:16:50 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 20 19:16:50 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 20 19:16:50 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 20 19:16:50 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 20 19:16:50 compute-0 sudo[206935]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:50 compute-0 sudo[207150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmghdzcapdreudbhugsjriytgmojjdeo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936610.5652924-1064-179626816916126/AnsiballZ_systemd.py'
Jan 20 19:16:50 compute-0 sudo[207150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:51 compute-0 python3.9[207152]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:16:51 compute-0 systemd[1]: Reloading.
Jan 20 19:16:51 compute-0 systemd-rc-local-generator[207180]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:16:51 compute-0 systemd-sysv-generator[207183]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:16:51 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 20 19:16:51 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 20 19:16:51 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 20 19:16:51 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 20 19:16:51 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 20 19:16:51 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 20 19:16:51 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 20 19:16:51 compute-0 sudo[207150]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:51 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 20 19:16:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:51 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 20 19:16:51 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 20 19:16:51 compute-0 sudo[207369]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyxhydboiwpbgdjiobviefsvlfmhdttv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936611.6680343-1064-233155712693856/AnsiballZ_systemd.py'
Jan 20 19:16:51 compute-0 sudo[207369]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:51 compute-0 ceph-mon[75120]: pgmap v543: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:52 compute-0 python3.9[207371]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:16:52 compute-0 systemd[1]: Reloading.
Jan 20 19:16:52 compute-0 systemd-rc-local-generator[207400]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:16:52 compute-0 systemd-sysv-generator[207403]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:16:52 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 20 19:16:52 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 20 19:16:52 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 20 19:16:52 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 20 19:16:52 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 20 19:16:52 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 20 19:16:52 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 20 19:16:52 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 20 19:16:52 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 20 19:16:52 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 20 19:16:52 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 20 19:16:52 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 20 19:16:52 compute-0 sudo[207369]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:52 compute-0 setroubleshoot[207189]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l d4fe53a8-32aa-419d-b759-530cad4fb2a7
Jan 20 19:16:52 compute-0 setroubleshoot[207189]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 20 19:16:53 compute-0 sudo[207587]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udchpgrafamgapumrbboftgjlqnyxyqj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936612.9588544-1064-217444918747576/AnsiballZ_systemd.py'
Jan 20 19:16:53 compute-0 sudo[207587]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:53 compute-0 python3.9[207589]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:16:53 compute-0 systemd[1]: Reloading.
Jan 20 19:16:53 compute-0 systemd-rc-local-generator[207615]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:16:53 compute-0 systemd-sysv-generator[207620]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:16:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:53 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 20 19:16:53 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 20 19:16:53 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 20 19:16:53 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 20 19:16:53 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 20 19:16:53 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 20 19:16:53 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 20 19:16:53 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 20 19:16:53 compute-0 ceph-mon[75120]: pgmap v544: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:54 compute-0 sudo[207587]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:54 compute-0 sudo[207799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhztxspljrrmglaerszxsijxagxpsbdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936614.2685597-1101-138738493910439/AnsiballZ_file.py'
Jan 20 19:16:54 compute-0 sudo[207799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:54 compute-0 python3.9[207801]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:54 compute-0 sudo[207799]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:55 compute-0 sudo[207951]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwtxnosoirdzjxvmztffmmmljidlywoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936614.8773103-1109-34319618026743/AnsiballZ_find.py'
Jan 20 19:16:55 compute-0 sudo[207951]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:55 compute-0 python3.9[207953]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 19:16:55 compute-0 sudo[207951]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:55 compute-0 sudo[208103]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aruounreegkxtinuakinlocavbbnuhjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936615.5037088-1117-208479342363400/AnsiballZ_command.py'
Jan 20 19:16:55 compute-0 sudo[208103]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:55 compute-0 python3.9[208105]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;
                                             echo ceph
                                             awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:16:55 compute-0 sudo[208103]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:55 compute-0 ceph-mon[75120]: pgmap v545: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:56 compute-0 python3.9[208259]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 19:16:57 compute-0 python3.9[208409]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:16:57 compute-0 python3.9[208530]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936616.9172997-1136-227311713050597/.source.xml follow=False _original_basename=secret.xml.j2 checksum=df9391033abbde40fc5cbff8cb85e1f03e415e51 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:57 compute-0 ceph-mon[75120]: pgmap v546: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:16:58 compute-0 sudo[208680]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zotawdcnjcoakjnotmmavizagwrhdjxn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936617.9280858-1151-6325902157850/AnsiballZ_command.py'
Jan 20 19:16:58 compute-0 sudo[208680]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:58 compute-0 python3.9[208682]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 90fff835-31df-513f-a409-b6642f04e6ac
                                             virsh secret-define --file /tmp/secret.xml
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:16:58 compute-0 polkitd[43397]: Registered Authentication Agent for unix-process:208684:323596 (system bus name :1.2548 [pkttyagent --process 208684 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 19:16:58 compute-0 polkitd[43397]: Unregistered Authentication Agent for unix-process:208684:323596 (system bus name :1.2548, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 19:16:58 compute-0 polkitd[43397]: Registered Authentication Agent for unix-process:208683:323595 (system bus name :1.2549 [pkttyagent --process 208683 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 19:16:58 compute-0 polkitd[43397]: Unregistered Authentication Agent for unix-process:208683:323595 (system bus name :1.2549, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 19:16:58 compute-0 sudo[208680]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:16:59 compute-0 python3.9[208844]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:16:59 compute-0 sudo[208994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ipxdramnonzcinrwfusnpqqrfwdmvbzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936619.1941931-1167-80116823040673/AnsiballZ_command.py'
Jan 20 19:16:59 compute-0 sudo[208994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:16:59 compute-0 sudo[208994]: pam_unix(sudo:session): session closed for user root
Jan 20 19:16:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:00 compute-0 ceph-mon[75120]: pgmap v547: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:00 compute-0 sudo[209147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpvdcbndhyttadwxkqaxyjwpnaighdtv ; FSID=90fff835-31df-513f-a409-b6642f04e6ac KEY=AQD40G9pAAAAABAAnCl2JBwdjyAhlZdo4nlc0A== /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936619.859444-1175-2027126014399/AnsiballZ_command.py'
Jan 20 19:17:00 compute-0 sudo[209147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:00 compute-0 polkitd[43397]: Registered Authentication Agent for unix-process:209150:323800 (system bus name :1.2552 [pkttyagent --process 209150 --notify-fd 4 --fallback], object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8)
Jan 20 19:17:00 compute-0 polkitd[43397]: Unregistered Authentication Agent for unix-process:209150:323800 (system bus name :1.2552, object path /org/freedesktop/PolicyKit1/AuthenticationAgent, locale en_US.UTF-8) (disconnected from bus)
Jan 20 19:17:00 compute-0 sudo[209147]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:00 compute-0 sudo[209305]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utoibscwznkmtxjkfgliyutilozzadfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936620.6453667-1183-109793698613925/AnsiballZ_copy.py'
Jan 20 19:17:00 compute-0 sudo[209305]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:01 compute-0 python3.9[209307]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:01 compute-0 sudo[209305]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:01 compute-0 sudo[209457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-njogiuzivegwveqbkoypffaltzopxzdj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936621.31131-1191-279479736714625/AnsiballZ_stat.py'
Jan 20 19:17:01 compute-0 sudo[209457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:01 compute-0 python3.9[209459]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:01 compute-0 sudo[209457]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:02 compute-0 sudo[209580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsnqjcpsohfmgirvhphvvtcbqmsjdhpy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936621.31131-1191-279479736714625/AnsiballZ_copy.py'
Jan 20 19:17:02 compute-0 sudo[209580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:02 compute-0 ceph-mon[75120]: pgmap v548: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:02 compute-0 python3.9[209582]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936621.31131-1191-279479736714625/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:02 compute-0 sudo[209580]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:02 compute-0 sudo[209732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiojnxqildhyhecuaeydyedzezsnlhxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936622.4492068-1207-94223838449147/AnsiballZ_file.py'
Jan 20 19:17:02 compute-0 sudo[209732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:02 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 20 19:17:02 compute-0 python3.9[209734]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:02 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 20 19:17:02 compute-0 sudo[209732]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:03 compute-0 sudo[209884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iweavpgjjagpyrrlxzyyqxhtdyjemgav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936623.1097698-1215-113348078506286/AnsiballZ_stat.py'
Jan 20 19:17:03 compute-0 sudo[209884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:03 compute-0 python3.9[209886]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:03 compute-0 sudo[209884]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:03 compute-0 sudo[209962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irqunhtuzzpizrvapupxhkybzvkfkgwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936623.1097698-1215-113348078506286/AnsiballZ_file.py'
Jan 20 19:17:03 compute-0 sudo[209962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:03 compute-0 python3.9[209964]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:03 compute-0 sudo[209962]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:04 compute-0 ceph-mon[75120]: pgmap v549: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:04 compute-0 sudo[210114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayaqxqlclsaqwtjlckcjxggwchlogavq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936624.1601582-1227-246888809479456/AnsiballZ_stat.py'
Jan 20 19:17:04 compute-0 sudo[210114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:04 compute-0 python3.9[210116]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:04 compute-0 sudo[210114]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:04 compute-0 sudo[210192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-juorqxvsjcwjeteqavwbhzackpbgusmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936624.1601582-1227-246888809479456/AnsiballZ_file.py'
Jan 20 19:17:04 compute-0 sudo[210192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:05 compute-0 python3.9[210194]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.ouxlbozi recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:05 compute-0 sudo[210192]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:17:05.442 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:17:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:17:05.443 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:17:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:17:05.444 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:17:05 compute-0 sudo[210361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkzjspaodceuckisczggzzqjqusnfvvt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936625.2538416-1239-41206892439116/AnsiballZ_stat.py'
Jan 20 19:17:05 compute-0 sudo[210361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:05 compute-0 podman[210318]: 2026-01-20 19:17:05.626204783 +0000 UTC m=+0.118746656 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 19:17:05 compute-0 python3.9[210369]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:05 compute-0 sudo[210361]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:05 compute-0 ceph-mon[75120]: pgmap v550: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:06 compute-0 sudo[210448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uliildewxzbllhfupxshyfkhuyngdbjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936625.2538416-1239-41206892439116/AnsiballZ_file.py'
Jan 20 19:17:06 compute-0 sudo[210448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:06 compute-0 python3.9[210450]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:06 compute-0 sudo[210448]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:06 compute-0 sudo[210600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkmgbyrcjnrzxhjascjtwzvyxpngrkum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936626.4850686-1252-186862489825873/AnsiballZ_command.py'
Jan 20 19:17:06 compute-0 sudo[210600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:06 compute-0 python3.9[210602]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:17:06 compute-0 sudo[210600]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:07 compute-0 podman[210680]: 2026-01-20 19:17:07.375978961 +0000 UTC m=+0.051819825 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 20 19:17:07 compute-0 sudo[210774]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxklqpufygxpxzomeajtozokrceljtau ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768936627.1341796-1260-87525827774299/AnsiballZ_edpm_nftables_from_files.py'
Jan 20 19:17:07 compute-0 sudo[210774]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:07 compute-0 python3[210776]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 20 19:17:07 compute-0 sudo[210774]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:07 compute-0 ceph-mon[75120]: pgmap v551: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:08 compute-0 sudo[210926]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhkqnxnxqvrszcabaluxipcxdznzwsut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936627.881162-1268-73417681366575/AnsiballZ_stat.py'
Jan 20 19:17:08 compute-0 sudo[210926]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:08 compute-0 python3.9[210928]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:08 compute-0 sudo[210926]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:08 compute-0 sudo[211004]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfcecsfdeyzrxpihghupqoikaghtqxwy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936627.881162-1268-73417681366575/AnsiballZ_file.py'
Jan 20 19:17:08 compute-0 sudo[211004]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:08 compute-0 python3.9[211006]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:08 compute-0 sudo[211004]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:09 compute-0 sudo[211156]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jexdpccvtpzudmzhdutoldfmuphzzxay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936628.9728224-1280-255564206229718/AnsiballZ_stat.py'
Jan 20 19:17:09 compute-0 sudo[211156]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:09 compute-0 python3.9[211158]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:09 compute-0 sudo[211156]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:09 compute-0 sudo[211281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-espmmyivkcaaidmzpncspsqrxrgjpuiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936628.9728224-1280-255564206229718/AnsiballZ_copy.py'
Jan 20 19:17:09 compute-0 sudo[211281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:09 compute-0 python3.9[211283]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936628.9728224-1280-255564206229718/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:09 compute-0 sudo[211281]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:09 compute-0 ceph-mon[75120]: pgmap v552: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:10 compute-0 sudo[211433]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwgvojppieipytbnuczrtgoehxicgomz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936630.1205637-1295-91651704863515/AnsiballZ_stat.py'
Jan 20 19:17:10 compute-0 sudo[211433]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:10 compute-0 python3.9[211435]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:10 compute-0 sudo[211433]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:10 compute-0 sudo[211511]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mupihjywitptgtrhkiycauuljykdltqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936630.1205637-1295-91651704863515/AnsiballZ_file.py'
Jan 20 19:17:10 compute-0 sudo[211511]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:11 compute-0 python3.9[211513]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:11 compute-0 sudo[211511]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:11 compute-0 sudo[211663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odgtisxonbmstosjhjbsfdtmikwloqdi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936631.1705937-1307-185316840446325/AnsiballZ_stat.py'
Jan 20 19:17:11 compute-0 sudo[211663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:11 compute-0 python3.9[211665]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:11 compute-0 sudo[211663]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:11 compute-0 sudo[211741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgfxowucjxmeypjgzajhvdpvyalqbsqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936631.1705937-1307-185316840446325/AnsiballZ_file.py'
Jan 20 19:17:11 compute-0 sudo[211741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:11 compute-0 ceph-mon[75120]: pgmap v553: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:12 compute-0 python3.9[211743]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:12 compute-0 sudo[211741]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:12 compute-0 sudo[211893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggbcxghpbryursvlmjwtxhtwfaixtdoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936632.2872982-1319-135757056697883/AnsiballZ_stat.py'
Jan 20 19:17:12 compute-0 sudo[211893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:12 compute-0 python3.9[211895]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:12 compute-0 sudo[211893]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:13 compute-0 sudo[212018]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kolwmavfodlyvxgqyjogjcbahhjrjucr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936632.2872982-1319-135757056697883/AnsiballZ_copy.py'
Jan 20 19:17:13 compute-0 sudo[212018]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:13 compute-0 python3.9[212020]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1768936632.2872982-1319-135757056697883/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:13 compute-0 sudo[212018]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:13 compute-0 sudo[212170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aciljfxxgluynarswczqievnxciflvvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936633.6712313-1334-118722593935102/AnsiballZ_file.py'
Jan 20 19:17:13 compute-0 sudo[212170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:13 compute-0 ceph-mon[75120]: pgmap v554: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:14 compute-0 python3.9[212172]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:14 compute-0 sudo[212170]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:14 compute-0 sudo[212322]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuxdakuodxjcdhvovfxjbsuwtknxgojn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936634.314858-1342-20266580968202/AnsiballZ_command.py'
Jan 20 19:17:14 compute-0 sudo[212322]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:14 compute-0 python3.9[212324]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:17:14 compute-0 sudo[212322]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:15 compute-0 sudo[212477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkffwhgbopadngajmzuhdzbalfnmwyym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936635.0144303-1350-178472026266901/AnsiballZ_blockinfile.py'
Jan 20 19:17:15 compute-0 sudo[212477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:15 compute-0 python3.9[212479]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:15 compute-0 sudo[212477]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:16 compute-0 sudo[212629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyvyfmfetnjyhrdngjxfytlpxhisljub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936635.936353-1359-273219171607045/AnsiballZ_command.py'
Jan 20 19:17:16 compute-0 sudo[212629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:16 compute-0 ceph-mon[75120]: pgmap v555: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:16 compute-0 python3.9[212631]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:17:16 compute-0 sudo[212629]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:16 compute-0 sudo[212782]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adyagynxiuqwomriywuclnsngqzxjlbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936636.6625621-1367-141591452635918/AnsiballZ_stat.py'
Jan 20 19:17:16 compute-0 sudo[212782]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:17 compute-0 python3.9[212784]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:17:17 compute-0 sudo[212782]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:17 compute-0 sudo[212936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xslsmtcwtuwtnvufrdmnmrxpaytrdxpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936637.3244636-1375-168261936559411/AnsiballZ_command.py'
Jan 20 19:17:17 compute-0 sudo[212936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:17 compute-0 python3.9[212938]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:17:17 compute-0 sudo[212936]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:18 compute-0 sudo[213091]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yvenwulnyvbxifkucuvqeoibuponghib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936638.03469-1383-145143125597583/AnsiballZ_file.py'
Jan 20 19:17:18 compute-0 sudo[213091]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:18 compute-0 python3.9[213093]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:18 compute-0 sudo[213091]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:18 compute-0 ceph-mon[75120]: pgmap v556: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:19 compute-0 sudo[213243]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qyakohfpqsicelwalqpchovqazwumnie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936638.7992146-1391-153261910856164/AnsiballZ_stat.py'
Jan 20 19:17:19 compute-0 sudo[213243]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:19 compute-0 python3.9[213245]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:19 compute-0 sudo[213243]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:19 compute-0 sudo[213366]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnufbfaumdbvbxzskbwssknehpmbiwto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936638.7992146-1391-153261910856164/AnsiballZ_copy.py'
Jan 20 19:17:19 compute-0 sudo[213366]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:19 compute-0 python3.9[213368]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936638.7992146-1391-153261910856164/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:19 compute-0 sudo[213366]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:20 compute-0 sudo[213518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aejlaccrwffuqgoplkzztzcjfnpcwjdh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936640.1311092-1406-218425547945734/AnsiballZ_stat.py'
Jan 20 19:17:20 compute-0 sudo[213518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:20 compute-0 ceph-mon[75120]: pgmap v557: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:20 compute-0 python3.9[213520]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:20 compute-0 sudo[213518]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:21 compute-0 sudo[213641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjutonnplxxonhjkekhpikwamhfqflgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936640.1311092-1406-218425547945734/AnsiballZ_copy.py'
Jan 20 19:17:21 compute-0 sudo[213641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:21 compute-0 python3.9[213643]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936640.1311092-1406-218425547945734/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:21 compute-0 sudo[213641]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:21 compute-0 sudo[213793]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-quwzhplpqjphyucgqhgzakomqanvepyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936641.4991426-1421-71834319203613/AnsiballZ_stat.py'
Jan 20 19:17:21 compute-0 sudo[213793]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:22 compute-0 python3.9[213795]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:22 compute-0 sudo[213793]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:22 compute-0 ceph-mon[75120]: pgmap v558: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:22 compute-0 sudo[213916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sorqnmbdoeiktbazrytrewcwucwuhpip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936641.4991426-1421-71834319203613/AnsiballZ_copy.py'
Jan 20 19:17:22 compute-0 sudo[213916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:22 compute-0 python3.9[213918]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936641.4991426-1421-71834319203613/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:22 compute-0 sudo[213916]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:23 compute-0 sudo[214068]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcvyyvrdkpfhrqqlzfxjdcdqibfltnzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936642.9386811-1436-224161342300640/AnsiballZ_systemd.py'
Jan 20 19:17:23 compute-0 sudo[214068]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:23 compute-0 python3.9[214070]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:17:23 compute-0 systemd[1]: Reloading.
Jan 20 19:17:23 compute-0 systemd-sysv-generator[214100]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:17:23 compute-0 systemd-rc-local-generator[214097]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:17:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:23 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 20 19:17:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:23 compute-0 sudo[214068]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:23 compute-0 ceph-mon[75120]: pgmap v559: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:24 compute-0 sudo[214259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mccyudwnjkffgztfngnzmrkblbhkkewg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936644.0685115-1444-233044549489359/AnsiballZ_systemd.py'
Jan 20 19:17:24 compute-0 sudo[214259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:24 compute-0 python3.9[214261]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 20 19:17:24 compute-0 systemd[1]: Reloading.
Jan 20 19:17:24 compute-0 systemd-sysv-generator[214291]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:17:24 compute-0 systemd-rc-local-generator[214288]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:17:25 compute-0 systemd[1]: Reloading.
Jan 20 19:17:25 compute-0 systemd-rc-local-generator[214327]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:17:25 compute-0 systemd-sysv-generator[214331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:17:25 compute-0 sudo[214259]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:25 compute-0 sshd-session[155346]: Connection closed by 192.168.122.30 port 59336
Jan 20 19:17:25 compute-0 sshd-session[155343]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:17:25 compute-0 systemd[1]: session-49.scope: Deactivated successfully.
Jan 20 19:17:25 compute-0 systemd[1]: session-49.scope: Consumed 3min 26.910s CPU time.
Jan 20 19:17:25 compute-0 systemd-logind[797]: Session 49 logged out. Waiting for processes to exit.
Jan 20 19:17:25 compute-0 systemd-logind[797]: Removed session 49.
Jan 20 19:17:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:25 compute-0 ceph-mon[75120]: pgmap v560: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:27 compute-0 ceph-mon[75120]: pgmap v561: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:30 compute-0 ceph-mon[75120]: pgmap v562: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:31 compute-0 sshd-session[214358]: Accepted publickey for zuul from 192.168.122.30 port 52502 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:17:31 compute-0 systemd-logind[797]: New session 50 of user zuul.
Jan 20 19:17:31 compute-0 systemd[1]: Started Session 50 of User zuul.
Jan 20 19:17:31 compute-0 sshd-session[214358]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:17:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:17:31
Jan 20 19:17:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:17:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:17:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'images', '.mgr', 'volumes', '.rgw.root']
Jan 20 19:17:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:17:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:31 compute-0 ceph-mon[75120]: pgmap v563: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:32 compute-0 python3.9[214511]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:17:33 compute-0 python3.9[214665]: ansible-ansible.builtin.service_facts Invoked
Jan 20 19:17:33 compute-0 network[214682]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 19:17:33 compute-0 network[214683]: 'network-scripts' will be removed from distribution in near future.
Jan 20 19:17:33 compute-0 network[214684]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 19:17:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:33 compute-0 ceph-mon[75120]: pgmap v564: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:17:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:17:35 compute-0 podman[214769]: 2026-01-20 19:17:35.796293621 +0000 UTC m=+0.110571310 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:17:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:35 compute-0 ceph-mon[75120]: pgmap v565: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:36 compute-0 sudo[214981]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykqgabcprrulkutmyoypfpvmkxnkxfqy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936656.597169-42-27381626290083/AnsiballZ_setup.py'
Jan 20 19:17:36 compute-0 sudo[214981]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:37 compute-0 python3.9[214983]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 20 19:17:37 compute-0 sudo[214981]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:37 compute-0 podman[214992]: 2026-01-20 19:17:37.544843766 +0000 UTC m=+0.052269281 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 19:17:37 compute-0 sudo[215084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilgdzshbsokvfgouxebkekxqpccaocao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936656.597169-42-27381626290083/AnsiballZ_dnf.py'
Jan 20 19:17:37 compute-0 sudo[215084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:37 compute-0 ceph-mon[75120]: pgmap v566: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:38 compute-0 python3.9[215086]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:17:38 compute-0 sudo[215088]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:17:38 compute-0 sudo[215088]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:38 compute-0 sudo[215088]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:38 compute-0 sudo[215113]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:17:38 compute-0 sudo[215113]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:38 compute-0 sudo[215113]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:17:38 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:17:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:17:38 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:17:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:17:38 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:17:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:17:38 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:17:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:17:38 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:17:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:17:38 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:17:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:38 compute-0 sudo[215166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:17:38 compute-0 sudo[215166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:38 compute-0 sudo[215166]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:38 compute-0 sudo[215191]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:17:38 compute-0 sudo[215191]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:17:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:17:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:17:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:17:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:17:39 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:17:39 compute-0 podman[215228]: 2026-01-20 19:17:39.181694554 +0000 UTC m=+0.034423673 container create 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 19:17:39 compute-0 systemd[1]: Started libpod-conmon-50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5.scope.
Jan 20 19:17:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:39 compute-0 podman[215228]: 2026-01-20 19:17:39.255670967 +0000 UTC m=+0.108400086 container init 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 19:17:39 compute-0 podman[215228]: 2026-01-20 19:17:39.166575955 +0000 UTC m=+0.019305094 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:17:39 compute-0 podman[215228]: 2026-01-20 19:17:39.265522233 +0000 UTC m=+0.118251362 container start 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:17:39 compute-0 podman[215228]: 2026-01-20 19:17:39.269248916 +0000 UTC m=+0.121978055 container attach 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:17:39 compute-0 flamboyant_kapitsa[215244]: 167 167
Jan 20 19:17:39 compute-0 systemd[1]: libpod-50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5.scope: Deactivated successfully.
Jan 20 19:17:39 compute-0 podman[215228]: 2026-01-20 19:17:39.275440322 +0000 UTC m=+0.128169471 container died 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:17:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-d1cbf375ac62b95710ad675672a4ca04303e9a4cd4da421b92adb2070db30e8d-merged.mount: Deactivated successfully.
Jan 20 19:17:39 compute-0 podman[215228]: 2026-01-20 19:17:39.585790262 +0000 UTC m=+0.438519391 container remove 50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=flamboyant_kapitsa, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:17:39 compute-0 systemd[1]: libpod-conmon-50c16160f360463648d3cc6d81fd0ab35edecae0591d54540a0b61a429018fb5.scope: Deactivated successfully.
Jan 20 19:17:39 compute-0 podman[215267]: 2026-01-20 19:17:39.754664712 +0000 UTC m=+0.039927282 container create e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 19:17:39 compute-0 systemd[1]: Started libpod-conmon-e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268.scope.
Jan 20 19:17:39 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:39 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:39 compute-0 podman[215267]: 2026-01-20 19:17:39.739885951 +0000 UTC m=+0.025148551 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:17:39 compute-0 podman[215267]: 2026-01-20 19:17:39.843033374 +0000 UTC m=+0.128295954 container init e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:17:39 compute-0 podman[215267]: 2026-01-20 19:17:39.854712286 +0000 UTC m=+0.139974846 container start e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:17:39 compute-0 podman[215267]: 2026-01-20 19:17:39.874389139 +0000 UTC m=+0.159651729 container attach e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:17:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:40 compute-0 ceph-mon[75120]: pgmap v567: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:40 compute-0 competent_mendel[215284]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:17:40 compute-0 competent_mendel[215284]: --> All data devices are unavailable
Jan 20 19:17:40 compute-0 systemd[1]: libpod-e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268.scope: Deactivated successfully.
Jan 20 19:17:40 compute-0 podman[215267]: 2026-01-20 19:17:40.327977837 +0000 UTC m=+0.613240427 container died e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 19:17:40 compute-0 systemd[1]: var-lib-containers-storage-overlay-436050a4bdf9ad18c4075ba8ed005e474704d76dc464403900b1d87b73dc8828-merged.mount: Deactivated successfully.
Jan 20 19:17:40 compute-0 podman[215267]: 2026-01-20 19:17:40.374202845 +0000 UTC m=+0.659465425 container remove e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_mendel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 19:17:40 compute-0 systemd[1]: libpod-conmon-e3f9711eb62ad3aba13aa47739f86a543bbfa9b46dd0a513ff593910aa8dc268.scope: Deactivated successfully.
Jan 20 19:17:40 compute-0 sudo[215191]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:40 compute-0 sudo[215316]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:17:40 compute-0 sudo[215316]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:40 compute-0 sudo[215316]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:40 compute-0 sudo[215341]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:17:40 compute-0 sudo[215341]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:40 compute-0 podman[215377]: 2026-01-20 19:17:40.849921677 +0000 UTC m=+0.038140875 container create 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:17:40 compute-0 podman[215377]: 2026-01-20 19:17:40.831597629 +0000 UTC m=+0.019816857 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:17:40 compute-0 systemd[1]: Started libpod-conmon-59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7.scope.
Jan 20 19:17:40 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:41 compute-0 podman[215377]: 2026-01-20 19:17:41.000280883 +0000 UTC m=+0.188500091 container init 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:17:41 compute-0 podman[215377]: 2026-01-20 19:17:41.006822416 +0000 UTC m=+0.195041614 container start 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:17:41 compute-0 intelligent_mestorf[215393]: 167 167
Jan 20 19:17:41 compute-0 systemd[1]: libpod-59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7.scope: Deactivated successfully.
Jan 20 19:17:41 compute-0 podman[215377]: 2026-01-20 19:17:41.011220166 +0000 UTC m=+0.199439384 container attach 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:17:41 compute-0 podman[215377]: 2026-01-20 19:17:41.013833792 +0000 UTC m=+0.202052990 container died 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 19:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-9585ac62579d0c488ac7f1d4cb79841924b053dae1f7ea692a23cd1a7b58388b-merged.mount: Deactivated successfully.
Jan 20 19:17:41 compute-0 podman[215377]: 2026-01-20 19:17:41.062234324 +0000 UTC m=+0.250453522 container remove 59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=intelligent_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:17:41 compute-0 systemd[1]: libpod-conmon-59ca7670eddc90aa496420596736de01b6fd530dff470370624474c64e337cc7.scope: Deactivated successfully.
Jan 20 19:17:41 compute-0 podman[215418]: 2026-01-20 19:17:41.212802884 +0000 UTC m=+0.041988732 container create 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:17:41 compute-0 systemd[1]: Started libpod-conmon-3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8.scope.
Jan 20 19:17:41 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62706d5b25fced56e7916ac9329e1c04f319df2e922f74ccfb937fb20bd9b07f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62706d5b25fced56e7916ac9329e1c04f319df2e922f74ccfb937fb20bd9b07f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62706d5b25fced56e7916ac9329e1c04f319df2e922f74ccfb937fb20bd9b07f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62706d5b25fced56e7916ac9329e1c04f319df2e922f74ccfb937fb20bd9b07f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:41 compute-0 podman[215418]: 2026-01-20 19:17:41.194888826 +0000 UTC m=+0.024074704 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:17:41 compute-0 podman[215418]: 2026-01-20 19:17:41.309172188 +0000 UTC m=+0.138358036 container init 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Jan 20 19:17:41 compute-0 podman[215418]: 2026-01-20 19:17:41.316537042 +0000 UTC m=+0.145723020 container start 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default)
Jan 20 19:17:41 compute-0 podman[215418]: 2026-01-20 19:17:41.320403649 +0000 UTC m=+0.149589517 container attach 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:17:41 compute-0 reverent_dirac[215435]: {
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:     "0": [
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:         {
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "devices": [
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "/dev/loop3"
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             ],
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_name": "ceph_lv0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_size": "21470642176",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "name": "ceph_lv0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "tags": {
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.cluster_name": "ceph",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.crush_device_class": "",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.encrypted": "0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.objectstore": "bluestore",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.osd_id": "0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.type": "block",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.vdo": "0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.with_tpm": "0"
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             },
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "type": "block",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "vg_name": "ceph_vg0"
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:         }
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:     ],
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:     "1": [
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:         {
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "devices": [
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "/dev/loop4"
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             ],
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_name": "ceph_lv1",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_size": "21470642176",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "name": "ceph_lv1",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "tags": {
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.cluster_name": "ceph",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.crush_device_class": "",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.encrypted": "0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.objectstore": "bluestore",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.osd_id": "1",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.type": "block",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.vdo": "0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.with_tpm": "0"
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             },
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "type": "block",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "vg_name": "ceph_vg1"
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:         }
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:     ],
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:     "2": [
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:         {
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "devices": [
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "/dev/loop5"
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             ],
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_name": "ceph_lv2",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_size": "21470642176",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "name": "ceph_lv2",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "tags": {
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.cluster_name": "ceph",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.crush_device_class": "",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.encrypted": "0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.objectstore": "bluestore",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.osd_id": "2",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.type": "block",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.vdo": "0",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:                 "ceph.with_tpm": "0"
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             },
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "type": "block",
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:             "vg_name": "ceph_vg2"
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:         }
Jan 20 19:17:41 compute-0 reverent_dirac[215435]:     ]
Jan 20 19:17:41 compute-0 reverent_dirac[215435]: }
Jan 20 19:17:41 compute-0 systemd[1]: libpod-3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8.scope: Deactivated successfully.
Jan 20 19:17:41 compute-0 podman[215418]: 2026-01-20 19:17:41.606974805 +0000 UTC m=+0.436160673 container died 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 19:17:41 compute-0 systemd[1]: var-lib-containers-storage-overlay-62706d5b25fced56e7916ac9329e1c04f319df2e922f74ccfb937fb20bd9b07f-merged.mount: Deactivated successfully.
Jan 20 19:17:41 compute-0 podman[215418]: 2026-01-20 19:17:41.649014258 +0000 UTC m=+0.478200106 container remove 3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:17:41 compute-0 systemd[1]: libpod-conmon-3980f87d366176f0c3e379d6da74f59403126445634ffdacc7cdd69817ab70c8.scope: Deactivated successfully.
Jan 20 19:17:41 compute-0 sudo[215341]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:41 compute-0 sudo[215457]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:17:41 compute-0 sudo[215457]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:41 compute-0 sudo[215457]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:41 compute-0 sudo[215482]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:17:41 compute-0 sudo[215482]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:41 compute-0 ceph-mon[75120]: pgmap v568: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:42 compute-0 podman[215519]: 2026-01-20 19:17:42.093550199 +0000 UTC m=+0.040306340 container create 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 19:17:42 compute-0 systemd[1]: Started libpod-conmon-70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1.scope.
Jan 20 19:17:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:42 compute-0 podman[215519]: 2026-01-20 19:17:42.077705452 +0000 UTC m=+0.024461613 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:17:42 compute-0 podman[215519]: 2026-01-20 19:17:42.17584271 +0000 UTC m=+0.122598931 container init 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 19:17:42 compute-0 podman[215519]: 2026-01-20 19:17:42.182108707 +0000 UTC m=+0.128864848 container start 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:17:42 compute-0 podman[215519]: 2026-01-20 19:17:42.186066945 +0000 UTC m=+0.132823106 container attach 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:17:42 compute-0 angry_banzai[215535]: 167 167
Jan 20 19:17:42 compute-0 systemd[1]: libpod-70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1.scope: Deactivated successfully.
Jan 20 19:17:42 compute-0 podman[215519]: 2026-01-20 19:17:42.188308332 +0000 UTC m=+0.135064473 container died 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 19:17:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-779b6b6ce031988af72dd0ff37c3d9dd57d569af15e5f7435c85a1fcf683120e-merged.mount: Deactivated successfully.
Jan 20 19:17:42 compute-0 podman[215519]: 2026-01-20 19:17:42.224244752 +0000 UTC m=+0.171000893 container remove 70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_banzai, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2)
Jan 20 19:17:42 compute-0 systemd[1]: libpod-conmon-70786e6b8f32bcfdcbf617cc96c2017e49c87504cf18a8d3e6096cff90e0def1.scope: Deactivated successfully.
Jan 20 19:17:42 compute-0 podman[215558]: 2026-01-20 19:17:42.455538524 +0000 UTC m=+0.103662007 container create ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 19:17:42 compute-0 podman[215558]: 2026-01-20 19:17:42.373741315 +0000 UTC m=+0.021864808 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:17:42 compute-0 systemd[1]: Started libpod-conmon-ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d.scope.
Jan 20 19:17:42 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4504fda14ffdbd1d3ffc32715d62bb8ca309757e7e61546ea7c07f34e5c49f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4504fda14ffdbd1d3ffc32715d62bb8ca309757e7e61546ea7c07f34e5c49f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4504fda14ffdbd1d3ffc32715d62bb8ca309757e7e61546ea7c07f34e5c49f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:42 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb4504fda14ffdbd1d3ffc32715d62bb8ca309757e7e61546ea7c07f34e5c49f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:17:42 compute-0 podman[215558]: 2026-01-20 19:17:42.527231239 +0000 UTC m=+0.175354742 container init ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:17:42 compute-0 podman[215558]: 2026-01-20 19:17:42.535277751 +0000 UTC m=+0.183401234 container start ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:17:42 compute-0 podman[215558]: 2026-01-20 19:17:42.539196369 +0000 UTC m=+0.187319852 container attach ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 19:17:43 compute-0 lvm[215654]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:17:43 compute-0 lvm[215654]: VG ceph_vg1 finished
Jan 20 19:17:43 compute-0 lvm[215653]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:17:43 compute-0 lvm[215653]: VG ceph_vg0 finished
Jan 20 19:17:43 compute-0 lvm[215656]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:17:43 compute-0 lvm[215656]: VG ceph_vg2 finished
Jan 20 19:17:43 compute-0 angry_elgamal[215575]: {}
Jan 20 19:17:43 compute-0 systemd[1]: libpod-ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d.scope: Deactivated successfully.
Jan 20 19:17:43 compute-0 systemd[1]: libpod-ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d.scope: Consumed 1.317s CPU time.
Jan 20 19:17:43 compute-0 podman[215558]: 2026-01-20 19:17:43.331712484 +0000 UTC m=+0.979835967 container died ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:17:43 compute-0 systemd[1]: var-lib-containers-storage-overlay-eb4504fda14ffdbd1d3ffc32715d62bb8ca309757e7e61546ea7c07f34e5c49f-merged.mount: Deactivated successfully.
Jan 20 19:17:43 compute-0 podman[215558]: 2026-01-20 19:17:43.397719517 +0000 UTC m=+1.045843040 container remove ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_elgamal, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:17:43 compute-0 systemd[1]: libpod-conmon-ca1717c26d8e4be9adb71b7306fc42d737e90a35c086507d2d837e337a55b49d.scope: Deactivated successfully.
Jan 20 19:17:43 compute-0 sudo[215482]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:17:43 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:17:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:17:43 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:17:43 compute-0 sudo[215673]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:17:43 compute-0 sudo[215673]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:17:43 compute-0 sudo[215673]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:43 compute-0 sudo[215084]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:44 compute-0 sudo[215847]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfksuefzlryjswiwdiskjkfyexopdjxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936664.0602908-54-88447130842896/AnsiballZ_stat.py'
Jan 20 19:17:44 compute-0 sudo[215847]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:44 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:17:44 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:17:44 compute-0 ceph-mon[75120]: pgmap v569: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:17:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:17:44 compute-0 python3.9[215849]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:17:44 compute-0 sudo[215847]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:45 compute-0 sudo[215999]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-euliaugcweyeobwluwbogaftflkdwhek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936664.8549027-64-175552809519580/AnsiballZ_command.py'
Jan 20 19:17:45 compute-0 sudo[215999]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:45 compute-0 python3.9[216001]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:17:45 compute-0 sudo[215999]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:45 compute-0 sudo[216152]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tomumerdgyubypohtxqignqgukdvsacv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936665.659348-74-252551090268599/AnsiballZ_stat.py'
Jan 20 19:17:45 compute-0 sudo[216152]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:45 compute-0 ceph-mon[75120]: pgmap v570: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:46 compute-0 python3.9[216154]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:17:46 compute-0 sudo[216152]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:46 compute-0 sudo[216304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpjwtdsneojraumvblaarkimbvsqupso ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936666.2639573-82-43093737670181/AnsiballZ_command.py'
Jan 20 19:17:46 compute-0 sudo[216304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:46 compute-0 python3.9[216306]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:17:46 compute-0 sudo[216304]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:47 compute-0 sudo[216457]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aklzaijrydimxzphcyjrmsosqaeundhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936666.852145-90-16826186677517/AnsiballZ_stat.py'
Jan 20 19:17:47 compute-0 sudo[216457]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:47 compute-0 python3.9[216459]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:17:47 compute-0 sudo[216457]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:47 compute-0 sudo[216580]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gapshepcsyngxziupppktmiysvtlzajj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936666.852145-90-16826186677517/AnsiballZ_copy.py'
Jan 20 19:17:47 compute-0 sudo[216580]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:48 compute-0 ceph-mon[75120]: pgmap v571: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:48 compute-0 python3.9[216582]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936666.852145-90-16826186677517/.source.iscsi _original_basename=.u97w0j4o follow=False checksum=9c63e5636d3dd22e5337afde50d813d21294a1dd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:48 compute-0 sudo[216580]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:48 compute-0 sudo[216732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sobcnzbfgycvwuplcwbeorwkfzpovyjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936668.238122-105-265063449285175/AnsiballZ_file.py'
Jan 20 19:17:48 compute-0 sudo[216732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:48 compute-0 python3.9[216734]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:48 compute-0 sudo[216732]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:49 compute-0 sudo[216884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goobxypffrddkzgbkjdatficpmttjqih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936669.1347961-113-23105775253325/AnsiballZ_lineinfile.py'
Jan 20 19:17:49 compute-0 sudo[216884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:49 compute-0 python3.9[216886]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:17:49 compute-0 sudo[216884]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:50 compute-0 ceph-mon[75120]: pgmap v572: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:50 compute-0 sudo[217036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yisinzyfiwlwfxmjzzyzhmkfgiobzswp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936670.0113206-122-131414896429543/AnsiballZ_systemd_service.py'
Jan 20 19:17:50 compute-0 sudo[217036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:50 compute-0 python3.9[217038]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:17:50 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 20 19:17:51 compute-0 sudo[217036]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:51 compute-0 sudo[217192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yttvjypkkidhmbvfcgzumhalcyhyqpxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936671.1715488-130-251699451817683/AnsiballZ_systemd_service.py'
Jan 20 19:17:51 compute-0 sudo[217192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:51 compute-0 python3.9[217194]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:17:51 compute-0 systemd[1]: Reloading.
Jan 20 19:17:51 compute-0 systemd-rc-local-generator[217221]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:17:51 compute-0 systemd-sysv-generator[217225]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:17:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:52 compute-0 ceph-mon[75120]: pgmap v573: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:52 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 20 19:17:52 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 20 19:17:52 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 20 19:17:52 compute-0 systemd[1]: Started Open-iSCSI.
Jan 20 19:17:52 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 20 19:17:52 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 20 19:17:52 compute-0 sudo[217192]: pam_unix(sudo:session): session closed for user root
Jan 20 19:17:53 compute-0 python3.9[217392]: ansible-ansible.builtin.service_facts Invoked
Jan 20 19:17:53 compute-0 network[217409]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 19:17:53 compute-0 network[217410]: 'network-scripts' will be removed from distribution in near future.
Jan 20 19:17:53 compute-0 network[217411]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 19:17:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:54 compute-0 ceph-mon[75120]: pgmap v574: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:56 compute-0 ceph-mon[75120]: pgmap v575: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:56 compute-0 sudo[217681]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxowhkjgkgetadgeacybunuzflfmmaqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936676.5422843-153-45208213875050/AnsiballZ_dnf.py'
Jan 20 19:17:56 compute-0 sudo[217681]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:17:57 compute-0 python3.9[217683]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:17:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:58 compute-0 ceph-mon[75120]: pgmap v576: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:17:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:17:59 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 19:17:59 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 19:17:59 compute-0 systemd[1]: Reloading.
Jan 20 19:17:59 compute-0 systemd-rc-local-generator[217728]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:17:59 compute-0 systemd-sysv-generator[217732]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:17:59 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 19:17:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:00 compute-0 ceph-mon[75120]: pgmap v577: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:00 compute-0 sudo[217681]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:00 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 19:18:00 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 19:18:00 compute-0 systemd[1]: run-re6527665684b4febacf460e10aa073fe.service: Deactivated successfully.
Jan 20 19:18:00 compute-0 sudo[217996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xsaccrnxgypqwxrwygetnjicbzuzxqfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936680.483153-162-97686390357267/AnsiballZ_file.py'
Jan 20 19:18:00 compute-0 sudo[217996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:00 compute-0 python3.9[217998]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 20 19:18:00 compute-0 sudo[217996]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:01 compute-0 sudo[218148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfmgrnlbwnzdiqnsftlyroispgwulits ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936681.1454427-170-232446108710564/AnsiballZ_modprobe.py'
Jan 20 19:18:01 compute-0 sudo[218148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:01 compute-0 python3.9[218150]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 20 19:18:01 compute-0 sudo[218148]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:02 compute-0 ceph-mon[75120]: pgmap v578: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:02 compute-0 sudo[218304]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmbcamvdtosucztclwluhrbfhfmlqzzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936681.9612522-178-250805566780138/AnsiballZ_stat.py'
Jan 20 19:18:02 compute-0 sudo[218304]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:02 compute-0 python3.9[218306]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:18:02 compute-0 sudo[218304]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:02 compute-0 sudo[218427]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akxvkjxcaahdkxoybtoifvzbajngfffn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936681.9612522-178-250805566780138/AnsiballZ_copy.py'
Jan 20 19:18:02 compute-0 sudo[218427]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:02 compute-0 python3.9[218429]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936681.9612522-178-250805566780138/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:02 compute-0 sudo[218427]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:03 compute-0 sudo[218579]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krpllxcpjolqdmblfdnsxxqmlxbzgenc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936683.1940336-194-247346297725645/AnsiballZ_lineinfile.py'
Jan 20 19:18:03 compute-0 sudo[218579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:03 compute-0 python3.9[218581]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:03 compute-0 sudo[218579]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:04 compute-0 ceph-mon[75120]: pgmap v579: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:04 compute-0 sudo[218731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjkwmwvhaezjqzjedhjgacwhowcloybp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936683.7789104-202-68408339700543/AnsiballZ_systemd.py'
Jan 20 19:18:04 compute-0 sudo[218731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:04 compute-0 python3.9[218733]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:18:04 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 19:18:04 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 20 19:18:04 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 20 19:18:04 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 20 19:18:04 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 20 19:18:04 compute-0 sudo[218731]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:05 compute-0 sudo[218887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkdnfjdiyehlkshhwbjlfshynncsdaga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936684.9323432-210-207272850507384/AnsiballZ_command.py'
Jan 20 19:18:05 compute-0 sudo[218887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:05 compute-0 python3.9[218889]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:05 compute-0 sudo[218887]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:18:05.443 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:18:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:18:05.444 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:18:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:18:05.444 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:18:05 compute-0 sudo[219051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jeybcqhlheeoyltjelaozkgjdbqoyida ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936685.6157854-220-64760744801854/AnsiballZ_stat.py'
Jan 20 19:18:05 compute-0 sudo[219051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:05 compute-0 podman[219014]: 2026-01-20 19:18:05.93343022 +0000 UTC m=+0.086465436 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:18:06 compute-0 ceph-mon[75120]: pgmap v580: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:06 compute-0 python3.9[219059]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:18:06 compute-0 sudo[219051]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:06 compute-0 sudo[219216]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qdwnnptqyaykfzogivsywhextisajuce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936686.3102148-229-195458465676049/AnsiballZ_stat.py'
Jan 20 19:18:06 compute-0 sudo[219216]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:06 compute-0 python3.9[219218]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:18:06 compute-0 sudo[219216]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:07 compute-0 sudo[219339]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzlbkxdobqzmwpmojrjjngzdfujrqtvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936686.3102148-229-195458465676049/AnsiballZ_copy.py'
Jan 20 19:18:07 compute-0 sudo[219339]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:07 compute-0 python3.9[219341]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936686.3102148-229-195458465676049/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:07 compute-0 sudo[219339]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:07 compute-0 podman[219465]: 2026-01-20 19:18:07.652116208 +0000 UTC m=+0.051791548 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Jan 20 19:18:07 compute-0 sudo[219508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgynrxkkjiywgnnnicsvohgonecgwfaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936687.390359-244-111917584172520/AnsiballZ_command.py'
Jan 20 19:18:07 compute-0 sudo[219508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:07 compute-0 python3.9[219512]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:07 compute-0 sudo[219508]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:08 compute-0 ceph-mon[75120]: pgmap v581: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:08 compute-0 sudo[219663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcnjhtzijgqqcfbmnukwdvdpdvmkizum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936688.0141568-252-90087071138610/AnsiballZ_lineinfile.py'
Jan 20 19:18:08 compute-0 sudo[219663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:08 compute-0 python3.9[219665]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:08 compute-0 sudo[219663]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:09 compute-0 sudo[219815]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skljnurzjlumojfaznyaouxghcsdmcoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936688.6568642-260-18671499843574/AnsiballZ_replace.py'
Jan 20 19:18:09 compute-0 sudo[219815]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:09 compute-0 python3.9[219817]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:09 compute-0 sudo[219815]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:09 compute-0 sudo[219967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqzwlwwjkfdipvftfjihbtbexfpgdufp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936689.3831365-268-265486133990273/AnsiballZ_replace.py'
Jan 20 19:18:09 compute-0 sudo[219967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:09 compute-0 python3.9[219969]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:09 compute-0 sudo[219967]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:10 compute-0 ceph-mon[75120]: pgmap v582: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:10 compute-0 sudo[220119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bltnscldwrgxxvtpitlzfnsjadkdtgea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936689.9989483-277-127116592377291/AnsiballZ_lineinfile.py'
Jan 20 19:18:10 compute-0 sudo[220119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:10 compute-0 python3.9[220121]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:10 compute-0 sudo[220119]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:10 compute-0 sudo[220271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unojcgujqfyosgjbshnqcefvurgroihb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936690.5648263-277-270155776504446/AnsiballZ_lineinfile.py'
Jan 20 19:18:10 compute-0 sudo[220271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:10 compute-0 python3.9[220273]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:10 compute-0 sudo[220271]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:11 compute-0 sudo[220423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sktezrlgfppghshsubzqstaykxvljeiw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936691.0976052-277-104842891313990/AnsiballZ_lineinfile.py'
Jan 20 19:18:11 compute-0 sudo[220423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:11 compute-0 python3.9[220425]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:11 compute-0 sudo[220423]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:11 compute-0 sudo[220575]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aezhmhsfvwbsjczpaewllqulebnoegpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936691.6613398-277-39194939591089/AnsiballZ_lineinfile.py'
Jan 20 19:18:11 compute-0 sudo[220575]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:12 compute-0 ceph-mon[75120]: pgmap v583: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:12 compute-0 python3.9[220577]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:12 compute-0 sudo[220575]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:12 compute-0 sudo[220727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whvfatxsozhqzxpvkqoaebuubfitxamh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936692.2702618-306-17219723308468/AnsiballZ_stat.py'
Jan 20 19:18:12 compute-0 sudo[220727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:12 compute-0 python3.9[220729]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:18:12 compute-0 sudo[220727]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:13 compute-0 sudo[220881]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htcalcqpvvrlehioscqymmyjiztvwwsz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936692.8530471-314-54603376257737/AnsiballZ_command.py'
Jan 20 19:18:13 compute-0 sudo[220881]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:13 compute-0 python3.9[220883]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:13 compute-0 sudo[220881]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:13 compute-0 sudo[221034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvjwabsnbcdkdllcrotgugejmfbiwnkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936693.5090206-323-77308876512881/AnsiballZ_systemd_service.py'
Jan 20 19:18:13 compute-0 sudo[221034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:14 compute-0 ceph-mon[75120]: pgmap v584: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:14 compute-0 python3.9[221036]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:18:14 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 20 19:18:14 compute-0 sudo[221034]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:14 compute-0 sudo[221190]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qumqrvkpajskebqekswphgvdrslbwfnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936694.283065-331-60805462116973/AnsiballZ_systemd_service.py'
Jan 20 19:18:14 compute-0 sudo[221190]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:14 compute-0 python3.9[221192]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:18:14 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 20 19:18:14 compute-0 udevadm[221197]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 20 19:18:14 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 20 19:18:14 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 20 19:18:15 compute-0 multipathd[221200]: --------start up--------
Jan 20 19:18:15 compute-0 multipathd[221200]: read /etc/multipath.conf
Jan 20 19:18:15 compute-0 multipathd[221200]: path checkers start up
Jan 20 19:18:15 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 20 19:18:15 compute-0 sudo[221190]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:15 compute-0 sudo[221357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jetiadawaeclajxoklgdvhsuoeclltbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936695.4040575-343-31833470239025/AnsiballZ_file.py'
Jan 20 19:18:15 compute-0 sudo[221357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:15 compute-0 python3.9[221359]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 20 19:18:15 compute-0 sudo[221357]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:16 compute-0 ceph-mon[75120]: pgmap v585: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:16 compute-0 sudo[221509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suaveckeadbxpftsohhtlvzgphtyeqgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936696.0645528-351-235172911484948/AnsiballZ_modprobe.py'
Jan 20 19:18:16 compute-0 sudo[221509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:16 compute-0 python3.9[221511]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 20 19:18:16 compute-0 kernel: Key type psk registered
Jan 20 19:18:16 compute-0 sudo[221509]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:17 compute-0 sudo[221672]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjoufwtgsvcdwfwsjehwxdekftgunrgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936697.0473604-359-72893046642656/AnsiballZ_stat.py'
Jan 20 19:18:17 compute-0 sudo[221672]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:17 compute-0 python3.9[221674]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:18:17 compute-0 sudo[221672]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:17 compute-0 sudo[221795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmixiauttjzyblzocnpuzzldabqqgzau ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936697.0473604-359-72893046642656/AnsiballZ_copy.py'
Jan 20 19:18:17 compute-0 sudo[221795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:18 compute-0 ceph-mon[75120]: pgmap v586: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:18 compute-0 python3.9[221797]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1768936697.0473604-359-72893046642656/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:18 compute-0 sudo[221795]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:18 compute-0 sudo[221947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sksigttxqcbrqaagjmhaogixjfderdni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936698.2601297-375-272548705391745/AnsiballZ_lineinfile.py'
Jan 20 19:18:18 compute-0 sudo[221947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:18 compute-0 python3.9[221949]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:18 compute-0 sudo[221947]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:19 compute-0 sudo[222099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmcpwdevbiwjegkoxcklruldxvlannby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936698.9024255-383-187534030544439/AnsiballZ_systemd.py'
Jan 20 19:18:19 compute-0 sudo[222099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:19 compute-0 python3.9[222101]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:18:19 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 20 19:18:19 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 20 19:18:19 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 20 19:18:19 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 20 19:18:19 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 20 19:18:19 compute-0 sudo[222099]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:20 compute-0 ceph-mon[75120]: pgmap v587: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:20 compute-0 sudo[222255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdvqjsnciyvetkiorolftqwifkmfllgz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936699.8332386-391-229939904549824/AnsiballZ_dnf.py'
Jan 20 19:18:20 compute-0 sudo[222255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:20 compute-0 python3.9[222257]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 20 19:18:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:22 compute-0 ceph-mon[75120]: pgmap v588: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:23 compute-0 systemd[1]: Reloading.
Jan 20 19:18:23 compute-0 systemd-sysv-generator[222294]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:18:23 compute-0 systemd-rc-local-generator[222291]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:18:23 compute-0 systemd[1]: Reloading.
Jan 20 19:18:23 compute-0 systemd-sysv-generator[222329]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:18:23 compute-0 systemd-rc-local-generator[222325]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:18:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:24 compute-0 ceph-mon[75120]: pgmap v589: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:24 compute-0 systemd-logind[797]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 20 19:18:24 compute-0 systemd-logind[797]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 20 19:18:24 compute-0 lvm[222373]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:18:24 compute-0 lvm[222373]: VG ceph_vg1 finished
Jan 20 19:18:24 compute-0 lvm[222372]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:18:24 compute-0 lvm[222372]: VG ceph_vg0 finished
Jan 20 19:18:24 compute-0 lvm[222376]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:18:24 compute-0 lvm[222376]: VG ceph_vg2 finished
Jan 20 19:18:24 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 20 19:18:24 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 20 19:18:24 compute-0 systemd[1]: Reloading.
Jan 20 19:18:24 compute-0 systemd-sysv-generator[222433]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:18:24 compute-0 systemd-rc-local-generator[222429]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:18:24 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 20 19:18:25 compute-0 sudo[222255]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:25 compute-0 sudo[223727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dczbkobtqiljrppqyfbqtlyvgyjfobnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936705.3854835-399-198007636560483/AnsiballZ_systemd_service.py'
Jan 20 19:18:25 compute-0 sudo[223727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:25 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 20 19:18:25 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 20 19:18:25 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.440s CPU time.
Jan 20 19:18:25 compute-0 systemd[1]: run-r065edafabcfc432aa9bcca6f3d25b4af.service: Deactivated successfully.
Jan 20 19:18:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:25 compute-0 python3.9[223729]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:18:25 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 20 19:18:25 compute-0 iscsid[217234]: iscsid shutting down.
Jan 20 19:18:25 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 20 19:18:25 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 20 19:18:25 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 20 19:18:25 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 20 19:18:25 compute-0 systemd[1]: Started Open-iSCSI.
Jan 20 19:18:26 compute-0 ceph-mon[75120]: pgmap v590: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:26 compute-0 sudo[223727]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:26 compute-0 sudo[223884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txmaplphkkdnjwubbdbfowjwwkrhnanw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936706.1725276-407-258471930600241/AnsiballZ_systemd_service.py'
Jan 20 19:18:26 compute-0 sudo[223884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:26 compute-0 python3.9[223886]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:18:26 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 20 19:18:26 compute-0 multipathd[221200]: exit (signal)
Jan 20 19:18:26 compute-0 multipathd[221200]: --------shut down-------
Jan 20 19:18:26 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 20 19:18:26 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 20 19:18:26 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 20 19:18:26 compute-0 multipathd[223893]: --------start up--------
Jan 20 19:18:26 compute-0 multipathd[223893]: read /etc/multipath.conf
Jan 20 19:18:26 compute-0 multipathd[223893]: path checkers start up
Jan 20 19:18:26 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 20 19:18:26 compute-0 sudo[223884]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:27 compute-0 python3.9[224050]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 20 19:18:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:28 compute-0 ceph-mon[75120]: pgmap v591: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:28 compute-0 sudo[224204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqrqkfgmbjwrtztvieowgzmwkiksvvrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936707.992366-425-76756987783640/AnsiballZ_file.py'
Jan 20 19:18:28 compute-0 sudo[224204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:28 compute-0 python3.9[224206]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:28 compute-0 sudo[224204]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:28 compute-0 sudo[224356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avxrftkffvppnonrstgueukxcuxgchdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936708.7194633-436-253718638546870/AnsiballZ_systemd_service.py'
Jan 20 19:18:28 compute-0 sudo[224356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:29 compute-0 python3.9[224358]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 19:18:29 compute-0 systemd[1]: Reloading.
Jan 20 19:18:29 compute-0 systemd-rc-local-generator[224384]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:18:29 compute-0 systemd-sysv-generator[224388]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:18:29 compute-0 sudo[224356]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:30 compute-0 ceph-mon[75120]: pgmap v592: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:30 compute-0 python3.9[224542]: ansible-ansible.builtin.service_facts Invoked
Jan 20 19:18:30 compute-0 network[224559]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 20 19:18:30 compute-0 network[224560]: 'network-scripts' will be removed from distribution in near future.
Jan 20 19:18:30 compute-0 network[224561]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 20 19:18:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:18:31
Jan 20 19:18:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:18:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:18:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['backups', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta']
Jan 20 19:18:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:18:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:32 compute-0 ceph-mon[75120]: pgmap v593: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:33 compute-0 sudo[224832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cylkjjmkyjechlpngejenwbceeuztgnk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936713.4327936-455-244656201103668/AnsiballZ_systemd_service.py'
Jan 20 19:18:33 compute-0 sudo[224832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:33 compute-0 python3.9[224834]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:18:34 compute-0 ceph-mon[75120]: pgmap v594: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:34 compute-0 sudo[224832]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:34 compute-0 sudo[224985]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rruvcamgpxezulhxzfyjsacjnayplkea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936714.1282306-455-75492916454648/AnsiballZ_systemd_service.py'
Jan 20 19:18:34 compute-0 sudo[224985]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:18:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:18:34 compute-0 python3.9[224987]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:18:34 compute-0 sudo[224985]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:35 compute-0 sudo[225138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mjftuzrjnegutlkkbshqnanctxcnserm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936714.8010838-455-110999994743147/AnsiballZ_systemd_service.py'
Jan 20 19:18:35 compute-0 sudo[225138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:35 compute-0 python3.9[225140]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:18:35 compute-0 sudo[225138]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:35 compute-0 sudo[225291]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkusbgqktndwixkfpenarnbsxzjtwtuh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936715.5402606-455-75589006951209/AnsiballZ_systemd_service.py'
Jan 20 19:18:35 compute-0 sudo[225291]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:36 compute-0 ceph-mon[75120]: pgmap v595: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:36 compute-0 python3.9[225293]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:18:36 compute-0 sudo[225291]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:36 compute-0 podman[225295]: 2026-01-20 19:18:36.222616906 +0000 UTC m=+0.094780192 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 20 19:18:36 compute-0 sudo[225470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyaurazzwkdfjvkugiyfoqjwsiokrbmw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936716.2824078-455-55543950051486/AnsiballZ_systemd_service.py'
Jan 20 19:18:36 compute-0 sudo[225470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:36 compute-0 python3.9[225472]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:18:36 compute-0 sudo[225470]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:37 compute-0 sudo[225623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyczjmymdhnfxoncklughpirmzaywkqh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936716.987563-455-36680523774590/AnsiballZ_systemd_service.py'
Jan 20 19:18:37 compute-0 sudo[225623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:37 compute-0 python3.9[225625]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:18:37 compute-0 sudo[225623]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:37 compute-0 sudo[225787]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-goqretrymautlzhoiqiowszzgyndjadp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936717.6695106-455-78816652296311/AnsiballZ_systemd_service.py'
Jan 20 19:18:37 compute-0 sudo[225787]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:37 compute-0 podman[225750]: 2026-01-20 19:18:37.942591921 +0000 UTC m=+0.044759097 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 20 19:18:38 compute-0 ceph-mon[75120]: pgmap v596: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:38 compute-0 python3.9[225796]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:18:38 compute-0 sudo[225787]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:38 compute-0 sudo[225947]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gigtjgpiuwzdismxiwwrsvlzjvvsinbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936718.3636122-455-156803253422685/AnsiballZ_systemd_service.py'
Jan 20 19:18:38 compute-0 sudo[225947]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:38 compute-0 python3.9[225949]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:18:38 compute-0 sudo[225947]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:39 compute-0 sudo[226100]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ozdyccltzgkoxydbmcobcpixkqoonicn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936719.2153373-514-105624582322996/AnsiballZ_file.py'
Jan 20 19:18:39 compute-0 sudo[226100]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:39 compute-0 python3.9[226102]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:39 compute-0 sudo[226100]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:40 compute-0 ceph-mon[75120]: pgmap v597: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:40 compute-0 sudo[226252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyydrwhxfvbybsgupqqgbmlgeqzyawsh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936719.781648-514-148414417419556/AnsiballZ_file.py'
Jan 20 19:18:40 compute-0 sudo[226252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:40 compute-0 python3.9[226254]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:40 compute-0 sudo[226252]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:40 compute-0 sudo[226404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hywvomwcqsxptnwrwzeoybsifptynqdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936720.3434694-514-63171413951553/AnsiballZ_file.py'
Jan 20 19:18:40 compute-0 sudo[226404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:40 compute-0 python3.9[226406]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:40 compute-0 sudo[226404]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:41 compute-0 sudo[226556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcpzftupmleozpthluzwqcqrolrzyndv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936720.8974497-514-249275611374638/AnsiballZ_file.py'
Jan 20 19:18:41 compute-0 sudo[226556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:41 compute-0 python3.9[226558]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:41 compute-0 sudo[226556]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:41 compute-0 sudo[226708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-senvznacmiawlszqdeenxxgwtufcayjf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936721.5200808-514-118035178111551/AnsiballZ_file.py'
Jan 20 19:18:41 compute-0 sudo[226708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:41 compute-0 python3.9[226710]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:41 compute-0 sudo[226708]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:42 compute-0 ceph-mon[75120]: pgmap v598: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:42 compute-0 sudo[226860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eevchhmvxcoimbpiouyawpvrvmdjmttk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936722.101929-514-83088434134114/AnsiballZ_file.py'
Jan 20 19:18:42 compute-0 sudo[226860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:42 compute-0 python3.9[226862]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:42 compute-0 sudo[226860]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:42 compute-0 sudo[227012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upaowrioztptrdwfrolrjpomrofcjfbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936722.656966-514-138882190714862/AnsiballZ_file.py'
Jan 20 19:18:42 compute-0 sudo[227012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:43 compute-0 python3.9[227014]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:43 compute-0 sudo[227012]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:43 compute-0 sudo[227164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umvepphbbkniodejbaztgixfwtbfvvey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936723.2522793-514-217942076630424/AnsiballZ_file.py'
Jan 20 19:18:43 compute-0 sudo[227164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:43 compute-0 sudo[227167]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:18:43 compute-0 sudo[227167]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:43 compute-0 sudo[227167]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:43 compute-0 sudo[227192]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:18:43 compute-0 sudo[227192]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:43 compute-0 python3.9[227166]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:43 compute-0 sudo[227164]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:44 compute-0 ceph-mon[75120]: pgmap v599: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:44 compute-0 sudo[227392]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqvcjwwfrshpipifvetiziqmqbgrreeb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936723.8999934-571-248262393159633/AnsiballZ_file.py'
Jan 20 19:18:44 compute-0 sudo[227392]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:44 compute-0 sudo[227192]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:18:44 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:18:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:18:44 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:18:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:18:44 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:18:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:18:44 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:18:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:18:44 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:18:44 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:18:44 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:18:44 compute-0 sudo[227401]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:18:44 compute-0 sudo[227401]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:44 compute-0 sudo[227401]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:44 compute-0 sudo[227426]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:18:44 compute-0 sudo[227426]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:44 compute-0 python3.9[227400]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:44 compute-0 sudo[227392]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:18:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:18:44 compute-0 podman[227546]: 2026-01-20 19:18:44.69559193 +0000 UTC m=+0.072459670 container create e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:18:44 compute-0 podman[227546]: 2026-01-20 19:18:44.648907346 +0000 UTC m=+0.025775116 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:18:44 compute-0 systemd[1]: Started libpod-conmon-e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383.scope.
Jan 20 19:18:44 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:44 compute-0 podman[227546]: 2026-01-20 19:18:44.79331389 +0000 UTC m=+0.170181630 container init e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:18:44 compute-0 podman[227546]: 2026-01-20 19:18:44.802530644 +0000 UTC m=+0.179398394 container start e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:18:44 compute-0 podman[227546]: 2026-01-20 19:18:44.805938597 +0000 UTC m=+0.182806337 container attach e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 20 19:18:44 compute-0 romantic_bell[227603]: 167 167
Jan 20 19:18:44 compute-0 systemd[1]: libpod-e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383.scope: Deactivated successfully.
Jan 20 19:18:44 compute-0 podman[227546]: 2026-01-20 19:18:44.808283394 +0000 UTC m=+0.185151134 container died e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, io.buildah.version=1.41.3, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2)
Jan 20 19:18:44 compute-0 sudo[227632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uqbpmarywnjuvlkhmcmutykebemnlldr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936724.5134616-571-250577879970020/AnsiballZ_file.py'
Jan 20 19:18:44 compute-0 sudo[227632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-feba32426eebee1680dcc8256bdd3c8a08980552317a8a774c8385934d18c6b4-merged.mount: Deactivated successfully.
Jan 20 19:18:44 compute-0 podman[227546]: 2026-01-20 19:18:44.84479872 +0000 UTC m=+0.221666460 container remove e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=romantic_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:18:44 compute-0 systemd[1]: libpod-conmon-e4c0bcadb05775de57cace8ddb0fb8af82751bba3bc67bf15b654cd29b354383.scope: Deactivated successfully.
Jan 20 19:18:45 compute-0 podman[227655]: 2026-01-20 19:18:45.002414066 +0000 UTC m=+0.041652532 container create 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:18:45 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:18:45 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:18:45 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:18:45 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:18:45 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:18:45 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:18:45 compute-0 python3.9[227637]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:45 compute-0 systemd[1]: Started libpod-conmon-31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd.scope.
Jan 20 19:18:45 compute-0 sudo[227632]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:45 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:45 compute-0 podman[227655]: 2026-01-20 19:18:44.98446534 +0000 UTC m=+0.023703836 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:18:45 compute-0 podman[227655]: 2026-01-20 19:18:45.128421794 +0000 UTC m=+0.167660290 container init 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 20 19:18:45 compute-0 podman[227655]: 2026-01-20 19:18:45.135721111 +0000 UTC m=+0.174959577 container start 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:18:45 compute-0 podman[227655]: 2026-01-20 19:18:45.143465628 +0000 UTC m=+0.182704114 container attach 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 20 19:18:45 compute-0 sudo[227833]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utsuxaogdbrpxcolmoalfmndsoezuvcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936725.1764357-571-93788090845872/AnsiballZ_file.py'
Jan 20 19:18:45 compute-0 sudo[227833]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:45 compute-0 vigorous_williams[227672]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:18:45 compute-0 vigorous_williams[227672]: --> All data devices are unavailable
Jan 20 19:18:45 compute-0 systemd[1]: libpod-31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd.scope: Deactivated successfully.
Jan 20 19:18:45 compute-0 podman[227655]: 2026-01-20 19:18:45.58888927 +0000 UTC m=+0.628127726 container died 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:18:45 compute-0 python3.9[227835]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:45 compute-0 systemd[1]: var-lib-containers-storage-overlay-60b0cef9f0c8630ccdf97af5aeee92d4860094c48a623fa034b4a49fe9dfcf04-merged.mount: Deactivated successfully.
Jan 20 19:18:45 compute-0 sudo[227833]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:45 compute-0 podman[227655]: 2026-01-20 19:18:45.637742035 +0000 UTC m=+0.676980501 container remove 31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vigorous_williams, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 19:18:45 compute-0 systemd[1]: libpod-conmon-31f32d54f0f6d527f26729ba326d4e78b79bf5000026f8e05cae8e7fd217dcfd.scope: Deactivated successfully.
Jan 20 19:18:45 compute-0 sudo[227426]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:45 compute-0 sudo[227874]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:18:45 compute-0 sudo[227874]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:45 compute-0 sudo[227874]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:45 compute-0 sudo[227913]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:18:45 compute-0 sudo[227913]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:46 compute-0 sshd-session[227854]: Invalid user ubuntu from 45.148.10.240 port 55626
Jan 20 19:18:46 compute-0 sudo[228057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkhekdfljhtqrvaedscdvnwkdzjohhcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936725.7650824-571-239785752692292/AnsiballZ_file.py'
Jan 20 19:18:46 compute-0 sudo[228057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:46 compute-0 ceph-mon[75120]: pgmap v600: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:46 compute-0 podman[228069]: 2026-01-20 19:18:46.076535755 +0000 UTC m=+0.035486902 container create 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:18:46 compute-0 sshd-session[227854]: Connection closed by invalid user ubuntu 45.148.10.240 port 55626 [preauth]
Jan 20 19:18:46 compute-0 systemd[1]: Started libpod-conmon-9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196.scope.
Jan 20 19:18:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:46 compute-0 podman[228069]: 2026-01-20 19:18:46.155309597 +0000 UTC m=+0.114260764 container init 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 19:18:46 compute-0 podman[228069]: 2026-01-20 19:18:46.060587958 +0000 UTC m=+0.019539135 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:18:46 compute-0 podman[228069]: 2026-01-20 19:18:46.16325933 +0000 UTC m=+0.122210477 container start 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:18:46 compute-0 serene_cannon[228085]: 167 167
Jan 20 19:18:46 compute-0 systemd[1]: libpod-9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196.scope: Deactivated successfully.
Jan 20 19:18:46 compute-0 podman[228069]: 2026-01-20 19:18:46.166658892 +0000 UTC m=+0.125610069 container attach 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030)
Jan 20 19:18:46 compute-0 podman[228069]: 2026-01-20 19:18:46.168503528 +0000 UTC m=+0.127454675 container died 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:18:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-481fee7a42902d0c1d32df4d099d33c889aee2c447986db47bbb401bb9e901a6-merged.mount: Deactivated successfully.
Jan 20 19:18:46 compute-0 podman[228069]: 2026-01-20 19:18:46.205085945 +0000 UTC m=+0.164037092 container remove 9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:18:46 compute-0 systemd[1]: libpod-conmon-9f8a9eedee949f38aa0dce048bf28bcdb0a0986f3c9d2e71c959228b7ee24196.scope: Deactivated successfully.
Jan 20 19:18:46 compute-0 python3.9[228068]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:46 compute-0 sudo[228057]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:46 compute-0 podman[228133]: 2026-01-20 19:18:46.364059193 +0000 UTC m=+0.040667568 container create 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:18:46 compute-0 systemd[1]: Started libpod-conmon-8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e.scope.
Jan 20 19:18:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17bfc67ed83fc6dd79cf207699eaff019df5b360aa85f6ed4fc6edf7821d985/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17bfc67ed83fc6dd79cf207699eaff019df5b360aa85f6ed4fc6edf7821d985/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17bfc67ed83fc6dd79cf207699eaff019df5b360aa85f6ed4fc6edf7821d985/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e17bfc67ed83fc6dd79cf207699eaff019df5b360aa85f6ed4fc6edf7821d985/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:46 compute-0 podman[228133]: 2026-01-20 19:18:46.347683216 +0000 UTC m=+0.024291621 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:18:46 compute-0 podman[228133]: 2026-01-20 19:18:46.455179695 +0000 UTC m=+0.131788090 container init 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:18:46 compute-0 podman[228133]: 2026-01-20 19:18:46.463378274 +0000 UTC m=+0.139986649 container start 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:18:46 compute-0 podman[228133]: 2026-01-20 19:18:46.466798287 +0000 UTC m=+0.143406662 container attach 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:18:46 compute-0 sudo[228280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsijspumtvvnlewqfatcyocansxuuwur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936726.3677392-571-127529430577432/AnsiballZ_file.py'
Jan 20 19:18:46 compute-0 sudo[228280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]: {
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:     "0": [
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:         {
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "devices": [
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "/dev/loop3"
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             ],
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_name": "ceph_lv0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_size": "21470642176",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "name": "ceph_lv0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "tags": {
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.cluster_name": "ceph",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.crush_device_class": "",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.encrypted": "0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.objectstore": "bluestore",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.osd_id": "0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.type": "block",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.vdo": "0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.with_tpm": "0"
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             },
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "type": "block",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "vg_name": "ceph_vg0"
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:         }
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:     ],
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:     "1": [
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:         {
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "devices": [
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "/dev/loop4"
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             ],
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_name": "ceph_lv1",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_size": "21470642176",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "name": "ceph_lv1",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "tags": {
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.cluster_name": "ceph",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.crush_device_class": "",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.encrypted": "0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.objectstore": "bluestore",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.osd_id": "1",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.type": "block",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.vdo": "0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.with_tpm": "0"
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             },
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "type": "block",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "vg_name": "ceph_vg1"
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:         }
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:     ],
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:     "2": [
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:         {
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "devices": [
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "/dev/loop5"
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             ],
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_name": "ceph_lv2",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_size": "21470642176",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "name": "ceph_lv2",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "tags": {
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.cluster_name": "ceph",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.crush_device_class": "",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.encrypted": "0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.objectstore": "bluestore",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.osd_id": "2",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.type": "block",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.vdo": "0",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:                 "ceph.with_tpm": "0"
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             },
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "type": "block",
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:             "vg_name": "ceph_vg2"
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:         }
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]:     ]
Jan 20 19:18:46 compute-0 upbeat_rosalind[228194]: }
Jan 20 19:18:46 compute-0 systemd[1]: libpod-8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e.scope: Deactivated successfully.
Jan 20 19:18:46 compute-0 conmon[228194]: conmon 8c4d18cbc2f1609c83e0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e.scope/container/memory.events
Jan 20 19:18:46 compute-0 podman[228133]: 2026-01-20 19:18:46.756400615 +0000 UTC m=+0.433008990 container died 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:18:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-e17bfc67ed83fc6dd79cf207699eaff019df5b360aa85f6ed4fc6edf7821d985-merged.mount: Deactivated successfully.
Jan 20 19:18:46 compute-0 python3.9[228282]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:46 compute-0 podman[228133]: 2026-01-20 19:18:46.807712561 +0000 UTC m=+0.484320936 container remove 8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=upbeat_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS)
Jan 20 19:18:46 compute-0 sudo[228280]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:46 compute-0 systemd[1]: libpod-conmon-8c4d18cbc2f1609c83e022f15703a37da096a3f2ea48789b76a94c84178d1c7e.scope: Deactivated successfully.
Jan 20 19:18:46 compute-0 sudo[227913]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:46 compute-0 sudo[228317]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:18:46 compute-0 sudo[228317]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:46 compute-0 sudo[228317]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:46 compute-0 sudo[228352]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:18:46 compute-0 sudo[228352]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:47 compute-0 podman[228487]: 2026-01-20 19:18:47.234400857 +0000 UTC m=+0.039387786 container create 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:18:47 compute-0 sudo[228522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wapfmgvasiikwctrsxdgvsrulizikktj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936726.954448-571-200857693979086/AnsiballZ_file.py'
Jan 20 19:18:47 compute-0 sudo[228522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:47 compute-0 systemd[1]: Started libpod-conmon-6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027.scope.
Jan 20 19:18:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:47 compute-0 podman[228487]: 2026-01-20 19:18:47.216237487 +0000 UTC m=+0.021224436 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:18:47 compute-0 podman[228487]: 2026-01-20 19:18:47.315495086 +0000 UTC m=+0.120482035 container init 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 19:18:47 compute-0 podman[228487]: 2026-01-20 19:18:47.322004513 +0000 UTC m=+0.126991442 container start 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:18:47 compute-0 podman[228487]: 2026-01-20 19:18:47.325466098 +0000 UTC m=+0.130453057 container attach 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Jan 20 19:18:47 compute-0 stoic_bassi[228529]: 167 167
Jan 20 19:18:47 compute-0 systemd[1]: libpod-6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027.scope: Deactivated successfully.
Jan 20 19:18:47 compute-0 podman[228487]: 2026-01-20 19:18:47.327520487 +0000 UTC m=+0.132507416 container died 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 19:18:47 compute-0 systemd[1]: var-lib-containers-storage-overlay-9488fc802a30c588fff5d29435ea64224755299514897f22415b1dfb5fa6e020-merged.mount: Deactivated successfully.
Jan 20 19:18:47 compute-0 podman[228487]: 2026-01-20 19:18:47.364105785 +0000 UTC m=+0.169092714 container remove 6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=stoic_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:18:47 compute-0 systemd[1]: libpod-conmon-6a2440f931c83a77190d94e5a4593bf9126356f4df4aa268e462637c91cba027.scope: Deactivated successfully.
Jan 20 19:18:47 compute-0 python3.9[228525]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:47 compute-0 sudo[228522]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:47 compute-0 podman[228555]: 2026-01-20 19:18:47.540505346 +0000 UTC m=+0.048843915 container create 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:18:47 compute-0 systemd[1]: Started libpod-conmon-2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861.scope.
Jan 20 19:18:47 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75fa455e9f3dd7e3afbe1fa0e775d3f94db77b77a5198d0dca0c18f28f704ef6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75fa455e9f3dd7e3afbe1fa0e775d3f94db77b77a5198d0dca0c18f28f704ef6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75fa455e9f3dd7e3afbe1fa0e775d3f94db77b77a5198d0dca0c18f28f704ef6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:47 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/75fa455e9f3dd7e3afbe1fa0e775d3f94db77b77a5198d0dca0c18f28f704ef6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:18:47 compute-0 podman[228555]: 2026-01-20 19:18:47.607813971 +0000 UTC m=+0.116152620 container init 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:18:47 compute-0 podman[228555]: 2026-01-20 19:18:47.519185329 +0000 UTC m=+0.027523918 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:18:47 compute-0 podman[228555]: 2026-01-20 19:18:47.619657778 +0000 UTC m=+0.127996337 container start 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:18:47 compute-0 podman[228555]: 2026-01-20 19:18:47.624807893 +0000 UTC m=+0.133146482 container attach 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:18:47 compute-0 sudo[228735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxbbeasyinmxqubensrmsnajezaamhak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936727.6329577-571-165451762102135/AnsiballZ_file.py'
Jan 20 19:18:47 compute-0 sudo[228735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:48 compute-0 ceph-mon[75120]: pgmap v601: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:48 compute-0 python3.9[228737]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:48 compute-0 sudo[228735]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:48 compute-0 lvm[228879]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:18:48 compute-0 lvm[228883]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:18:48 compute-0 lvm[228879]: VG ceph_vg0 finished
Jan 20 19:18:48 compute-0 lvm[228883]: VG ceph_vg1 finished
Jan 20 19:18:48 compute-0 lvm[228901]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:18:48 compute-0 lvm[228901]: VG ceph_vg2 finished
Jan 20 19:18:48 compute-0 magical_driscoll[228595]: {}
Jan 20 19:18:48 compute-0 systemd[1]: libpod-2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861.scope: Deactivated successfully.
Jan 20 19:18:48 compute-0 podman[228555]: 2026-01-20 19:18:48.473759777 +0000 UTC m=+0.982098356 container died 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:18:48 compute-0 systemd[1]: libpod-2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861.scope: Consumed 1.382s CPU time.
Jan 20 19:18:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-75fa455e9f3dd7e3afbe1fa0e775d3f94db77b77a5198d0dca0c18f28f704ef6-merged.mount: Deactivated successfully.
Jan 20 19:18:48 compute-0 podman[228555]: 2026-01-20 19:18:48.523494975 +0000 UTC m=+1.031833544 container remove 2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_driscoll, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 19:18:48 compute-0 systemd[1]: libpod-conmon-2f850eb2d34e3af960bb12f3f166aaefbf2bdf6fb29b54226b3a9b3d38589861.scope: Deactivated successfully.
Jan 20 19:18:48 compute-0 sudo[228967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lywvovjgsqwgojtjwtxeskqwyioieujj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936728.2562473-571-133865771844179/AnsiballZ_file.py'
Jan 20 19:18:48 compute-0 sudo[228967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:48 compute-0 sudo[228352]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:18:48 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:18:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:18:48 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:18:48 compute-0 sudo[228970]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:18:48 compute-0 sudo[228970]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:18:48 compute-0 sudo[228970]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:48 compute-0 python3.9[228969]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:18:48 compute-0 sudo[228967]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:49 compute-0 sudo[229144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmwpiyiibizzsiedvqzqdveinvkmfnyu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936728.9477656-629-202948630839835/AnsiballZ_command.py'
Jan 20 19:18:49 compute-0 sudo[229144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:49 compute-0 python3.9[229146]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:49 compute-0 sudo[229144]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:18:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:18:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:50 compute-0 python3.9[229298]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 20 19:18:50 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 20 19:18:50 compute-0 sudo[229449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwduzolnqzjiqeuolohimwfzpfwuqyxr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936730.2981668-647-166043867215443/AnsiballZ_systemd_service.py'
Jan 20 19:18:50 compute-0 sudo[229449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:50 compute-0 ceph-mon[75120]: pgmap v602: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:50 compute-0 python3.9[229451]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 19:18:50 compute-0 systemd[1]: Reloading.
Jan 20 19:18:51 compute-0 systemd-sysv-generator[229482]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:18:51 compute-0 systemd-rc-local-generator[229478]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:18:51 compute-0 sudo[229449]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:51 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 20 19:18:51 compute-0 sudo[229637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zzgjvpfowevmwhtnwqzfzdzjvfgdifjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936731.3916388-655-125452802074294/AnsiballZ_command.py'
Jan 20 19:18:51 compute-0 sudo[229637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:51 compute-0 python3.9[229639]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:51 compute-0 sudo[229637]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:52 compute-0 ceph-mon[75120]: pgmap v603: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:52 compute-0 sudo[229790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvycliuololpqpzagdpfkitibsiwuvad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936732.0135238-655-191992514292910/AnsiballZ_command.py'
Jan 20 19:18:52 compute-0 sudo[229790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:52 compute-0 python3.9[229792]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:52 compute-0 sudo[229790]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:52 compute-0 sudo[229943]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrjkqxlqyoaeyheyewyqcickjhplphzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936732.5971518-655-61905655037979/AnsiballZ_command.py'
Jan 20 19:18:52 compute-0 sudo[229943]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:53 compute-0 python3.9[229945]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:53 compute-0 sudo[229943]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:53 compute-0 sudo[230096]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxxnauddpvhhedgibmqwvpwbahwtuwcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936733.1753318-655-162849075981573/AnsiballZ_command.py'
Jan 20 19:18:53 compute-0 sudo[230096]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:53 compute-0 python3.9[230098]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:53 compute-0 sudo[230096]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:54 compute-0 ceph-mon[75120]: pgmap v604: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:54 compute-0 sudo[230249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqjcuhvwcegsjwslzfajgfebbgherogr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936733.847492-655-61492193864987/AnsiballZ_command.py'
Jan 20 19:18:54 compute-0 sudo[230249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:54 compute-0 python3.9[230251]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:54 compute-0 sudo[230249]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:54 compute-0 sudo[230402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynikhluagowgduqxntodwdksdfsvydkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936734.4350414-655-71372670004571/AnsiballZ_command.py'
Jan 20 19:18:54 compute-0 sudo[230402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:54 compute-0 python3.9[230404]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:54 compute-0 sudo[230402]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:55 compute-0 sudo[230555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qewtdvafzsceewolubjdicmvtyttnizw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936734.9862332-655-103550675171411/AnsiballZ_command.py'
Jan 20 19:18:55 compute-0 sudo[230555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:55 compute-0 python3.9[230557]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:55 compute-0 sudo[230555]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:55 compute-0 sudo[230708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdiefcarfezewzcdempmsadimxhzdcqd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936735.53251-655-20844681462452/AnsiballZ_command.py'
Jan 20 19:18:55 compute-0 sudo[230708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:55 compute-0 python3.9[230710]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 20 19:18:55 compute-0 sudo[230708]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:56 compute-0 ceph-mon[75120]: pgmap v605: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:56 compute-0 sudo[230861]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tdnisouhpzubiimuidffvsmcryhuwfiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936736.7422764-734-72284132344456/AnsiballZ_file.py'
Jan 20 19:18:56 compute-0 sudo[230861]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:57 compute-0 python3.9[230863]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:18:57 compute-0 sudo[230861]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:57 compute-0 sudo[231013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmvhzqfcecoqaivlslvlijzpkmznqkly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936737.2899094-734-270476687344912/AnsiballZ_file.py'
Jan 20 19:18:57 compute-0 sudo[231013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:57 compute-0 python3.9[231015]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:18:57 compute-0 sudo[231013]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:58 compute-0 ceph-mon[75120]: pgmap v606: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:58 compute-0 sudo[231165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvefpcbcwxohhzwmthsxoquvrraunwgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936737.8617408-734-131061555258353/AnsiballZ_file.py'
Jan 20 19:18:58 compute-0 sudo[231165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:58 compute-0 python3.9[231167]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:18:58 compute-0 sudo[231165]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:58 compute-0 sudo[231317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bukxfcbmqoqhxpzwptvlghvrpphufbdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936738.5224226-756-216625014098668/AnsiballZ_file.py'
Jan 20 19:18:58 compute-0 sudo[231317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:18:58 compute-0 python3.9[231319]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:18:58 compute-0 sudo[231317]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:59 compute-0 sudo[231469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjfbaptjvzoaebcnulktcnhwehfxkemj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936739.1177413-756-37295269154485/AnsiballZ_file.py'
Jan 20 19:18:59 compute-0 sudo[231469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:18:59 compute-0 python3.9[231471]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:18:59 compute-0 sudo[231469]: pam_unix(sudo:session): session closed for user root
Jan 20 19:18:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:18:59 compute-0 sudo[231621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zfxfwavkmqvdjrkvknuwaiuoksjxresf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936739.7152145-756-250609693584573/AnsiballZ_file.py'
Jan 20 19:18:59 compute-0 sudo[231621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:00 compute-0 ceph-mon[75120]: pgmap v607: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:00 compute-0 python3.9[231623]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:00 compute-0 sudo[231621]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:00 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 20 19:19:00 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 20 19:19:00 compute-0 sudo[231775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yjnwhoukdrxdkaqeplbjvlvycraqhubo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936740.312159-756-271036612926070/AnsiballZ_file.py'
Jan 20 19:19:00 compute-0 sudo[231775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:00 compute-0 python3.9[231777]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:00 compute-0 sudo[231775]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:01 compute-0 sudo[231927]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoojrttexsxlnigfoyhatjisdrabdsdc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936740.9079847-756-280188300678958/AnsiballZ_file.py'
Jan 20 19:19:01 compute-0 sudo[231927]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:01 compute-0 python3.9[231929]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:01 compute-0 sudo[231927]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:01 compute-0 sudo[232079]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctirffrzmsrwlbyygjnisvurhrbqvozk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936741.4593344-756-277413989796125/AnsiballZ_file.py'
Jan 20 19:19:01 compute-0 sudo[232079]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:01 compute-0 python3.9[232081]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:01 compute-0 sudo[232079]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:02 compute-0 ceph-mon[75120]: pgmap v608: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:02 compute-0 sudo[232231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tvoowlhvfivmeheegeevxxwvjwqjbtkm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936742.1298463-756-84043094728422/AnsiballZ_file.py'
Jan 20 19:19:02 compute-0 sudo[232231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:02 compute-0 python3.9[232233]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:02 compute-0 sudo[232231]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:04 compute-0 ceph-mon[75120]: pgmap v609: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:19:05.444 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:19:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:19:05.445 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:19:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:19:05.445 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:19:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:06 compute-0 ceph-mon[75120]: pgmap v610: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:06 compute-0 podman[232258]: 2026-01-20 19:19:06.438112224 +0000 UTC m=+0.111125527 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Jan 20 19:19:07 compute-0 sudo[232410]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psagrpvtrffazyzcsgwsajzbvvnarczo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936747.0582-945-144942627593476/AnsiballZ_getent.py'
Jan 20 19:19:07 compute-0 sudo[232410]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:07 compute-0 python3.9[232412]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 20 19:19:07 compute-0 sudo[232410]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:08 compute-0 ceph-mon[75120]: pgmap v611: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:08 compute-0 sudo[232578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzpduohwlznfgnrsijrbmrwaoxhvtvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936747.8655803-953-208653452820165/AnsiballZ_group.py'
Jan 20 19:19:08 compute-0 sudo[232578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:08 compute-0 podman[232537]: 2026-01-20 19:19:08.330382571 +0000 UTC m=+0.083652972 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:19:08 compute-0 python3.9[232584]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 20 19:19:08 compute-0 groupadd[232586]: group added to /etc/group: name=nova, GID=42436
Jan 20 19:19:08 compute-0 groupadd[232586]: group added to /etc/gshadow: name=nova
Jan 20 19:19:08 compute-0 groupadd[232586]: new group: name=nova, GID=42436
Jan 20 19:19:08 compute-0 sudo[232578]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:09 compute-0 sudo[232741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-seejfksaebrwvcnybnfoyycsuzwoxrna ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936748.7638113-961-23031626960858/AnsiballZ_user.py'
Jan 20 19:19:09 compute-0 sudo[232741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:09 compute-0 python3.9[232743]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 20 19:19:09 compute-0 useradd[232745]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 20 19:19:09 compute-0 useradd[232745]: add 'nova' to group 'libvirt'
Jan 20 19:19:09 compute-0 useradd[232745]: add 'nova' to shadow group 'libvirt'
Jan 20 19:19:09 compute-0 sudo[232741]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:10 compute-0 ceph-mon[75120]: pgmap v612: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:10 compute-0 sshd-session[232776]: Accepted publickey for zuul from 192.168.122.30 port 47608 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:19:10 compute-0 systemd-logind[797]: New session 51 of user zuul.
Jan 20 19:19:10 compute-0 systemd[1]: Started Session 51 of User zuul.
Jan 20 19:19:10 compute-0 sshd-session[232776]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:19:10 compute-0 sshd-session[232779]: Received disconnect from 192.168.122.30 port 47608:11: disconnected by user
Jan 20 19:19:10 compute-0 sshd-session[232779]: Disconnected from user zuul 192.168.122.30 port 47608
Jan 20 19:19:10 compute-0 sshd-session[232776]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:19:10 compute-0 systemd[1]: session-51.scope: Deactivated successfully.
Jan 20 19:19:10 compute-0 systemd-logind[797]: Session 51 logged out. Waiting for processes to exit.
Jan 20 19:19:10 compute-0 systemd-logind[797]: Removed session 51.
Jan 20 19:19:11 compute-0 python3.9[232929]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:19:11 compute-0 python3.9[233050]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936750.8253129-986-121421234066519/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:12 compute-0 ceph-mon[75120]: pgmap v613: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:12 compute-0 python3.9[233200]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:19:12 compute-0 python3.9[233276]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:13 compute-0 python3.9[233426]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:19:13 compute-0 python3.9[233547]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936752.8455598-986-133233329163324/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:14 compute-0 ceph-mon[75120]: pgmap v614: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:14 compute-0 python3.9[233697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:19:14 compute-0 python3.9[233818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936753.836776-986-50083820915030/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:15 compute-0 python3.9[233968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:19:15 compute-0 python3.9[234089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936754.897706-986-118519131702604/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:16 compute-0 ceph-mon[75120]: pgmap v615: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:16 compute-0 python3.9[234239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:19:16 compute-0 python3.9[234360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936755.8897028-986-225756935706964/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:17 compute-0 sudo[234510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlzorpnkbzvuxbxrummfzuicldzfxhjs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936756.9863255-1069-129703003682862/AnsiballZ_file.py'
Jan 20 19:19:17 compute-0 sudo[234510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:17 compute-0 python3.9[234512]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:19:17 compute-0 sudo[234510]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:17 compute-0 sudo[234662]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hurgvdwszlpcmfxwkwbanqfvtkjirpof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936757.5583427-1077-265071709750259/AnsiballZ_copy.py'
Jan 20 19:19:17 compute-0 sudo[234662]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:18 compute-0 ceph-mon[75120]: pgmap v616: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:18 compute-0 python3.9[234664]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:19:18 compute-0 sudo[234662]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:18 compute-0 sudo[234814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvwjnnqgqytckdrcedygqoqgckplowzd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936758.2032042-1085-13596472831396/AnsiballZ_stat.py'
Jan 20 19:19:18 compute-0 sudo[234814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:18 compute-0 python3.9[234816]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:19:18 compute-0 sudo[234814]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:19 compute-0 sudo[234966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokhcbbsjrlabmlrihzptdvnrbfkagpw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936758.786734-1093-179629866760944/AnsiballZ_stat.py'
Jan 20 19:19:19 compute-0 sudo[234966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:19 compute-0 python3.9[234968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:19:19 compute-0 sudo[234966]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:19 compute-0 sudo[235089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srkdjruuydycrtnmucdfcqasnicrxent ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936758.786734-1093-179629866760944/AnsiballZ_copy.py'
Jan 20 19:19:19 compute-0 sudo[235089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:19 compute-0 python3.9[235091]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1768936758.786734-1093-179629866760944/.source _original_basename=.kok20ry_ follow=False checksum=73adea5ed8aa8586a32b875bac50fd818bde17fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 20 19:19:19 compute-0 sudo[235089]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:20 compute-0 ceph-mon[75120]: pgmap v617: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:20 compute-0 python3.9[235243]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:19:21 compute-0 python3.9[235395]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:19:21 compute-0 python3.9[235516]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936760.6883092-1119-132680697012746/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:22 compute-0 ceph-mon[75120]: pgmap v618: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:22 compute-0 python3.9[235666]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 20 19:19:22 compute-0 python3.9[235787]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1768936761.801784-1134-13881176490092/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 20 19:19:23 compute-0 sudo[235937]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hhgeyqrzjqrhxcbfphbdpnlcnrvdhlmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936763.1386535-1151-34203554643167/AnsiballZ_container_config_data.py'
Jan 20 19:19:23 compute-0 sudo[235937]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:23 compute-0 python3.9[235939]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 20 19:19:23 compute-0 sudo[235937]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:24 compute-0 ceph-mon[75120]: pgmap v619: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:24 compute-0 sudo[236089]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufvsulinefcchgijyeleuikkxthynkdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936764.0321562-1162-195847162404959/AnsiballZ_container_config_hash.py'
Jan 20 19:19:24 compute-0 sudo[236089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:24 compute-0 python3.9[236091]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 19:19:24 compute-0 sudo[236089]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:25 compute-0 sudo[236241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aiihfmycsnuadzgqowhmymcylbenljgr ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768936765.1389632-1172-218271983724007/AnsiballZ_edpm_container_manage.py'
Jan 20 19:19:25 compute-0 sudo[236241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:25 compute-0 python3[236243]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 19:19:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:26 compute-0 ceph-mon[75120]: pgmap v620: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:29 compute-0 ceph-mon[75120]: pgmap v621: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:30 compute-0 ceph-mon[75120]: pgmap v622: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:19:31
Jan 20 19:19:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:19:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:19:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'volumes', '.rgw.root', '.mgr', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms']
Jan 20 19:19:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:19:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:32 compute-0 ceph-mon[75120]: pgmap v623: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:19:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:19:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:36 compute-0 ceph-mon[75120]: pgmap v624: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:36 compute-0 podman[236256]: 2026-01-20 19:19:36.617035997 +0000 UTC m=+10.696701287 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 19:19:36 compute-0 podman[236339]: 2026-01-20 19:19:36.766674848 +0000 UTC m=+0.066635298 container create d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.build-date=20251202, managed_by=edpm_ansible)
Jan 20 19:19:36 compute-0 podman[236339]: 2026-01-20 19:19:36.724072595 +0000 UTC m=+0.024033065 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 19:19:36 compute-0 python3[236243]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 20 19:19:36 compute-0 sudo[236241]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:37 compute-0 sudo[236537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpvoplcyfhklzjczfhemgwenehnsbhpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936777.0439515-1180-147849602863348/AnsiballZ_stat.py'
Jan 20 19:19:37 compute-0 sudo[236537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:37 compute-0 podman[236501]: 2026-01-20 19:19:37.342424883 +0000 UTC m=+0.080881094 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:19:37 compute-0 python3.9[236546]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:19:37 compute-0 sudo[236537]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:37 compute-0 ceph-mon[75120]: pgmap v625: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 15 op/s
Jan 20 19:19:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:38 compute-0 sudo[236707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qnhaqsarqgxcugddheduoowojsudnvcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936777.8821445-1192-174430690274510/AnsiballZ_container_config_data.py'
Jan 20 19:19:38 compute-0 sudo[236707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:38 compute-0 python3.9[236709]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 20 19:19:38 compute-0 sudo[236707]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:38 compute-0 ceph-mon[75120]: pgmap v626: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:38 compute-0 podman[236833]: 2026-01-20 19:19:38.902816684 +0000 UTC m=+0.054030672 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 20 19:19:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:38 compute-0 sudo[236876]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfiyvyeugxooxsbpkbjgucvtatqqkxay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936778.6322014-1203-142148826040410/AnsiballZ_container_config_hash.py'
Jan 20 19:19:38 compute-0 sudo[236876]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:39 compute-0 python3.9[236880]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 20 19:19:39 compute-0 sudo[236876]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:39 compute-0 sudo[237030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvvcfkolushkdusctletuevbzvvealju ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1768936779.3676803-1213-30877291809116/AnsiballZ_edpm_container_manage.py'
Jan 20 19:19:39 compute-0 sudo[237030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:39 compute-0 python3[237032]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 20 19:19:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:40 compute-0 ceph-mon[75120]: pgmap v627: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:40 compute-0 podman[237067]: 2026-01-20 19:19:40.070150677 +0000 UTC m=+0.047038203 container create 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:19:40 compute-0 podman[237067]: 2026-01-20 19:19:40.043284865 +0000 UTC m=+0.020172421 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 20 19:19:40 compute-0 python3[237032]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume 
/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 20 19:19:40 compute-0 sudo[237030]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:40 compute-0 sudo[237255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilmcnvbbjpkdesszzdqtcfdbbrcfxsoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936780.3216088-1221-76546274177769/AnsiballZ_stat.py'
Jan 20 19:19:40 compute-0 sudo[237255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:40 compute-0 python3.9[237257]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:19:40 compute-0 sudo[237255]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:41 compute-0 sudo[237409]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waslroellhwifzszzzaarcqfzduvwmub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936781.0157259-1230-181942432829096/AnsiballZ_file.py'
Jan 20 19:19:41 compute-0 sudo[237409]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:43 compute-0 python3.9[237411]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:19:43 compute-0 sudo[237409]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:43 compute-0 ceph-mon[75120]: pgmap v628: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.919042) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936783919104, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1867, "num_deletes": 250, "total_data_size": 3154602, "memory_usage": 3194392, "flush_reason": "Manual Compaction"}
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936783933985, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1773559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11778, "largest_seqno": 13644, "table_properties": {"data_size": 1767537, "index_size": 3033, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15072, "raw_average_key_size": 20, "raw_value_size": 1754200, "raw_average_value_size": 2338, "num_data_blocks": 140, "num_entries": 750, "num_filter_entries": 750, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936570, "oldest_key_time": 1768936570, "file_creation_time": 1768936783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 15010 microseconds, and 5529 cpu microseconds.
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.934058) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1773559 bytes OK
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.934078) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.935696) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.935710) EVENT_LOG_v1 {"time_micros": 1768936783935707, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.935726) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 3146755, prev total WAL file size 3146755, number of live WAL files 2.
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.936529) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1731KB)], [29(7980KB)]
Jan 20 19:19:43 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936783936610, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9945303, "oldest_snapshot_seqno": -1}
Jan 20 19:19:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4061 keys, 7935047 bytes, temperature: kUnknown
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936784030587, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7935047, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7906056, "index_size": 17745, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10181, "raw_key_size": 96447, "raw_average_key_size": 23, "raw_value_size": 7831047, "raw_average_value_size": 1928, "num_data_blocks": 771, "num_entries": 4061, "num_filter_entries": 4061, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936783, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.030785) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7935047 bytes
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.032012) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 105.8 rd, 84.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.7, 7.8 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(10.1) write-amplify(4.5) OK, records in: 4473, records dropped: 412 output_compression: NoCompression
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.032028) EVENT_LOG_v1 {"time_micros": 1768936784032020, "job": 12, "event": "compaction_finished", "compaction_time_micros": 94031, "compaction_time_cpu_micros": 18148, "output_level": 6, "num_output_files": 1, "total_output_size": 7935047, "num_input_records": 4473, "num_output_records": 4061, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936784032492, "job": 12, "event": "table_file_deletion", "file_number": 31}
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936784033878, "job": 12, "event": "table_file_deletion", "file_number": 29}
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:43.936433) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.034000) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.034011) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.034013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.034015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:19:44 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:19:44.034016) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:19:44 compute-0 sudo[237560]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcxfhjvjgwaqpudukdgbtlibrtkdbffk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936783.7317803-1230-52676674584819/AnsiballZ_copy.py'
Jan 20 19:19:44 compute-0 sudo[237560]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:44 compute-0 python3.9[237562]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1768936783.7317803-1230-52676674584819/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 20 19:19:44 compute-0 sudo[237560]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:44 compute-0 sudo[237636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlzxqvphoqzjnmbmlzsvkndmpshybmon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936783.7317803-1230-52676674584819/AnsiballZ_systemd.py'
Jan 20 19:19:44 compute-0 sudo[237636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:19:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:19:44 compute-0 python3.9[237638]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 20 19:19:44 compute-0 systemd[1]: Reloading.
Jan 20 19:19:44 compute-0 systemd-rc-local-generator[237666]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:19:44 compute-0 ceph-mon[75120]: pgmap v629: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:44 compute-0 systemd-sysv-generator[237670]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:19:45 compute-0 sudo[237636]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:45 compute-0 sudo[237747]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nccvhskgpmsqbtgtlkmdpxirosgwwwvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936783.7317803-1230-52676674584819/AnsiballZ_systemd.py'
Jan 20 19:19:45 compute-0 sudo[237747]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:45 compute-0 python3.9[237749]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 20 19:19:45 compute-0 systemd[1]: Reloading.
Jan 20 19:19:45 compute-0 systemd-rc-local-generator[237780]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 20 19:19:45 compute-0 systemd-sysv-generator[237783]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 20 19:19:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:46 compute-0 ceph-mon[75120]: pgmap v630: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:46 compute-0 systemd[1]: Starting nova_compute container...
Jan 20 19:19:46 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:46 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:46 compute-0 podman[237790]: 2026-01-20 19:19:46.325733964 +0000 UTC m=+0.098287847 container init 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 20 19:19:46 compute-0 podman[237790]: 2026-01-20 19:19:46.331411872 +0000 UTC m=+0.103965735 container start 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 19:19:46 compute-0 podman[237790]: nova_compute
Jan 20 19:19:46 compute-0 nova_compute[237805]: + sudo -E kolla_set_configs
Jan 20 19:19:46 compute-0 systemd[1]: Started nova_compute container.
Jan 20 19:19:46 compute-0 sudo[237747]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Validating config file
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying service configuration files
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Deleting /etc/ceph
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Creating directory /etc/ceph
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /etc/ceph
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Writing out command to execute
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:19:46 compute-0 nova_compute[237805]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 19:19:46 compute-0 nova_compute[237805]: ++ cat /run_command
Jan 20 19:19:46 compute-0 nova_compute[237805]: + CMD=nova-compute
Jan 20 19:19:46 compute-0 nova_compute[237805]: + ARGS=
Jan 20 19:19:46 compute-0 nova_compute[237805]: + sudo kolla_copy_cacerts
Jan 20 19:19:46 compute-0 nova_compute[237805]: + [[ ! -n '' ]]
Jan 20 19:19:46 compute-0 nova_compute[237805]: + . kolla_extend_start
Jan 20 19:19:46 compute-0 nova_compute[237805]: + echo 'Running command: '\''nova-compute'\'''
Jan 20 19:19:46 compute-0 nova_compute[237805]: Running command: 'nova-compute'
Jan 20 19:19:46 compute-0 nova_compute[237805]: + umask 0022
Jan 20 19:19:46 compute-0 nova_compute[237805]: + exec nova-compute
Jan 20 19:19:47 compute-0 python3.9[237966]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:19:47 compute-0 python3.9[238117]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:19:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:48 compute-0 ceph-mon[75120]: pgmap v631: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:48 compute-0 nova_compute[237805]: 2026-01-20 19:19:48.513 237809 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:19:48 compute-0 nova_compute[237805]: 2026-01-20 19:19:48.513 237809 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:19:48 compute-0 nova_compute[237805]: 2026-01-20 19:19:48.513 237809 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:19:48 compute-0 nova_compute[237805]: 2026-01-20 19:19:48.513 237809 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 20 19:19:48 compute-0 python3.9[238267]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 20 19:19:48 compute-0 nova_compute[237805]: 2026-01-20 19:19:48.664 237809 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:19:48 compute-0 nova_compute[237805]: 2026-01-20 19:19:48.680 237809 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:19:48 compute-0 nova_compute[237805]: 2026-01-20 19:19:48.680 237809 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 19:19:48 compute-0 sudo[238294]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:19:48 compute-0 sudo[238294]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:48 compute-0 sudo[238294]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:48 compute-0 sudo[238321]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:19:48 compute-0 sudo[238321]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:49 compute-0 sudo[238489]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onvtiepwcxqrvuwwhtojqwhuwrsjntnh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936788.7789958-1290-65203503297707/AnsiballZ_podman_container.py'
Jan 20 19:19:49 compute-0 sudo[238489]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.249 237809 INFO nova.virt.driver [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 20 19:19:49 compute-0 sudo[238321]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:19:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:19:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:19:49 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:19:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:19:49 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:19:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:19:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:19:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:19:49 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:19:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:19:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.380 237809 INFO nova.compute.provider_config [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 20 19:19:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:19:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:19:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:19:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:19:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:19:49 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:19:49 compute-0 sudo[238504]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:19:49 compute-0 sudo[238504]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.398 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.399 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.399 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.399 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.399 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 sudo[238504]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.400 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.401 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.402 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.403 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.404 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.405 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.406 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.407 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.408 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.409 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.410 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.411 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.412 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.413 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.414 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.415 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.416 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.417 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.418 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.419 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.420 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.421 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.422 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.423 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.424 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.425 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.426 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.427 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.428 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.429 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.430 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.431 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.432 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.433 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.434 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.435 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.436 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.437 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.438 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.438 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.438 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.438 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.438 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.439 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.440 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.441 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.442 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.443 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.444 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.445 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.446 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.447 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.448 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.449 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.450 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.450 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.450 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.450 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.451 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.451 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.451 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.451 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.452 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 sudo[238529]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.452 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.452 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.453 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.454 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.454 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.454 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.454 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 sudo[238529]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.454 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.455 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.455 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.455 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.455 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.456 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.456 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.456 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.456 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.456 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.457 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.457 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.457 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.457 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.457 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.458 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.458 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.458 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.458 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.459 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.459 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.459 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.459 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.459 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.460 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.460 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.460 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.460 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.461 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.461 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.461 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.461 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.462 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.462 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.462 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.462 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.462 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.463 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.463 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.463 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.463 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.464 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.464 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.464 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.464 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.464 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.465 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.465 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.465 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.465 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.465 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.466 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.466 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.466 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.466 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.467 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.467 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.467 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.467 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.468 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.468 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.468 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.468 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.468 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.469 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.469 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.469 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.469 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.469 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.470 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.470 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.470 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.470 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.471 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.471 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.471 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.471 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.471 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.472 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.472 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.472 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.472 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.472 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.473 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.473 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.473 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.473 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.474 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.474 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.474 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.474 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.475 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.475 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.475 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.475 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.476 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.476 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.476 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.476 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.477 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.477 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.477 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.477 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.477 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.478 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.478 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.478 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.478 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.479 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.479 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.479 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.479 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.479 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.480 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.480 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.480 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.480 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.481 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.481 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.481 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.481 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.481 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.482 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.482 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.482 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.482 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.483 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.483 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.483 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.483 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.483 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.484 237809 WARNING oslo_config.cfg [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 20 19:19:49 compute-0 nova_compute[237805]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 20 19:19:49 compute-0 nova_compute[237805]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 20 19:19:49 compute-0 nova_compute[237805]: and ``live_migration_inbound_addr`` respectively.
Jan 20 19:19:49 compute-0 nova_compute[237805]: ).  Its value may be silently ignored in the future.
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.484 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.484 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.485 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.485 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.485 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.485 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.486 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.486 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.486 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.486 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.486 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.487 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.487 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.487 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.487 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.488 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.488 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.488 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.488 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rbd_secret_uuid        = 90fff835-31df-513f-a409-b6642f04e6ac log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.489 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.489 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.489 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.489 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.489 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.490 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.490 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.490 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.490 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.491 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.491 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.491 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.491 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.492 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.492 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.492 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.492 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.492 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.493 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.493 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.493 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.493 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.493 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.494 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.494 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.494 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.494 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.495 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.495 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.495 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.495 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.496 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.496 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.496 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.496 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.496 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.497 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.497 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.497 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.497 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.497 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.498 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.498 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.498 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.498 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.499 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.499 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.499 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.499 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.499 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.500 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.500 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.500 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.500 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.501 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.501 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.501 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.501 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.501 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.502 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.502 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.502 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.502 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.503 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.503 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.503 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.503 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.503 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.504 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.504 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.504 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.504 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.504 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.505 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.505 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.505 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.505 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.506 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.506 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.506 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.506 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.506 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.507 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.507 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.507 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.507 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.508 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.508 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.508 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.508 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.508 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.509 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.509 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.509 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.509 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.510 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.510 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.510 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.510 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.510 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.511 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.511 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.511 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.511 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.511 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.512 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.512 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.512 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.512 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.513 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.513 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.513 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.513 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.513 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.514 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.514 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.514 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.514 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.514 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.515 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 python3.9[238492]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.515 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.515 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.515 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.516 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.516 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.516 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.516 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.516 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.517 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.517 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.517 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.517 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.517 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.518 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.519 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.520 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.521 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.521 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.521 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.521 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.521 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.522 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.523 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.524 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.525 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.526 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.527 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.528 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.529 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.530 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.531 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.532 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.533 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.534 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.535 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.536 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.537 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.538 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.539 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.540 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.541 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.541 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.541 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.541 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.541 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.542 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.543 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.544 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.544 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.544 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.544 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.544 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.545 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.545 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.545 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.545 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.545 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.546 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.546 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.546 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.546 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.546 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.547 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.547 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.547 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.547 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.547 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.548 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.549 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.549 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.549 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.549 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.549 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.550 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.550 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.550 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.550 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.550 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.551 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.551 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.551 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.551 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.551 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.552 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.553 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.553 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.553 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.553 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.553 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.554 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.554 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.554 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.554 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.554 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.555 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.555 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.555 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.555 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.555 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.556 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.557 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.557 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.557 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.557 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.557 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.558 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.558 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.558 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.558 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.558 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.559 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.559 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.559 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.559 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.559 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.560 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.560 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.560 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.560 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.560 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.561 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.562 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.562 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.562 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.562 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.562 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.563 237809 DEBUG oslo_service.service [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.564 237809 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.579 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.579 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.580 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.580 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 20 19:19:49 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 20 19:19:49 compute-0 sudo[238489]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:49 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.641 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fddfe88f460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.644 237809 DEBUG nova.virt.libvirt.host [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fddfe88f460> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.646 237809 INFO nova.virt.libvirt.driver [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Connection event '1' reason 'None'
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.659 237809 WARNING nova.virt.libvirt.driver [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 20 19:19:49 compute-0 nova_compute[237805]: 2026-01-20 19:19:49.660 237809 DEBUG nova.virt.libvirt.volume.mount [None req-69c53445-21e5-4a27-856a-d4d0f8aca529 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 20 19:19:49 compute-0 podman[238655]: 2026-01-20 19:19:49.748915488 +0000 UTC m=+0.050648732 container create 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3)
Jan 20 19:19:49 compute-0 systemd[1]: Started libpod-conmon-8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2.scope.
Jan 20 19:19:49 compute-0 podman[238655]: 2026-01-20 19:19:49.722949507 +0000 UTC m=+0.024682781 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:19:49 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:49 compute-0 podman[238655]: 2026-01-20 19:19:49.82981177 +0000 UTC m=+0.131545044 container init 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:19:49 compute-0 podman[238655]: 2026-01-20 19:19:49.835556409 +0000 UTC m=+0.137289653 container start 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:19:49 compute-0 podman[238655]: 2026-01-20 19:19:49.83841337 +0000 UTC m=+0.140146634 container attach 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True)
Jan 20 19:19:49 compute-0 recursing_wilbur[238702]: 167 167
Jan 20 19:19:49 compute-0 systemd[1]: libpod-8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2.scope: Deactivated successfully.
Jan 20 19:19:49 compute-0 podman[238655]: 2026-01-20 19:19:49.840819177 +0000 UTC m=+0.142552431 container died 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:19:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-f350fc7b68c25a168bcd89e5f7802c3f0b110d84a5f1f8b2ab81bf8edc34d1d2-merged.mount: Deactivated successfully.
Jan 20 19:19:49 compute-0 podman[238655]: 2026-01-20 19:19:49.877130389 +0000 UTC m=+0.178863633 container remove 8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=recursing_wilbur, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:19:49 compute-0 systemd[1]: libpod-conmon-8ed758fe5dbb101ed1259c12164dfd1166f5f92bee3b2059b32d2f3edae809f2.scope: Deactivated successfully.
Jan 20 19:19:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:50 compute-0 podman[238801]: 2026-01-20 19:19:50.040023742 +0000 UTC m=+0.048555589 container create 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 19:19:50 compute-0 sudo[238841]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wwpbixqyttbpqkpwkojgetddxlvoztwr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936789.7858112-1298-140525954204253/AnsiballZ_systemd.py'
Jan 20 19:19:50 compute-0 sudo[238841]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:50 compute-0 systemd[1]: Started libpod-conmon-06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78.scope.
Jan 20 19:19:50 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:50 compute-0 podman[238801]: 2026-01-20 19:19:50.022103758 +0000 UTC m=+0.030635635 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:19:50 compute-0 podman[238801]: 2026-01-20 19:19:50.134103826 +0000 UTC m=+0.142635673 container init 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:19:50 compute-0 podman[238801]: 2026-01-20 19:19:50.142632763 +0000 UTC m=+0.151164610 container start 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:19:50 compute-0 podman[238801]: 2026-01-20 19:19:50.147199524 +0000 UTC m=+0.155731411 container attach 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 19:19:50 compute-0 python3.9[238843]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 20 19:19:50 compute-0 ceph-mon[75120]: pgmap v632: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:50 compute-0 systemd[1]: Stopping nova_compute container...
Jan 20 19:19:50 compute-0 nova_compute[237805]: 2026-01-20 19:19:50.448 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:19:50 compute-0 nova_compute[237805]: 2026-01-20 19:19:50.449 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:19:50 compute-0 nova_compute[237805]: 2026-01-20 19:19:50.450 237809 DEBUG oslo_concurrency.lockutils [None req-c99e109d-1cb3-4fe1-9f19-e6c47f4cbb04 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:19:50 compute-0 focused_hofstadter[238846]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:19:50 compute-0 focused_hofstadter[238846]: --> All data devices are unavailable
Jan 20 19:19:50 compute-0 systemd[1]: libpod-06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78.scope: Deactivated successfully.
Jan 20 19:19:50 compute-0 podman[238801]: 2026-01-20 19:19:50.668146997 +0000 UTC m=+0.676678874 container died 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030)
Jan 20 19:19:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-bfd1e54545df10a12b7c4d91b3f29c99257d3a8cfd4eb6fc2ef9a57a554d3b85-merged.mount: Deactivated successfully.
Jan 20 19:19:50 compute-0 podman[238801]: 2026-01-20 19:19:50.709690406 +0000 UTC m=+0.718222253 container remove 06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=focused_hofstadter, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:19:50 compute-0 systemd[1]: libpod-conmon-06fba6ba25e88c3668f088cc225ba9217620c056d1e47c6411d9cf0f5649db78.scope: Deactivated successfully.
Jan 20 19:19:50 compute-0 sudo[238529]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:50 compute-0 sudo[238902]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:19:50 compute-0 sudo[238902]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:50 compute-0 sudo[238902]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:50 compute-0 sudo[238927]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:19:50 compute-0 sudo[238927]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:50 compute-0 virtqemud[238596]: libvirt version: 11.10.0, package: 2.el9 (builder@centos.org, 2025-12-18-15:09:54, )
Jan 20 19:19:50 compute-0 systemd[1]: libpod-26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0.scope: Deactivated successfully.
Jan 20 19:19:50 compute-0 virtqemud[238596]: hostname: compute-0
Jan 20 19:19:50 compute-0 virtqemud[238596]: End of file while reading data: Input/output error
Jan 20 19:19:50 compute-0 systemd[1]: libpod-26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0.scope: Consumed 3.083s CPU time.
Jan 20 19:19:50 compute-0 podman[238864]: 2026-01-20 19:19:50.930139966 +0000 UTC m=+0.517649744 container died 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202, config_id=edpm, io.buildah.version=1.41.3)
Jan 20 19:19:50 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0-userdata-shm.mount: Deactivated successfully.
Jan 20 19:19:50 compute-0 systemd[1]: var-lib-containers-storage-overlay-cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec-merged.mount: Deactivated successfully.
Jan 20 19:19:51 compute-0 podman[238864]: 2026-01-20 19:19:51.947740254 +0000 UTC m=+1.535250032 container cleanup 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:19:51 compute-0 podman[238864]: nova_compute
Jan 20 19:19:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:52 compute-0 podman[238979]: 2026-01-20 19:19:52.038736833 +0000 UTC m=+0.071943247 container create a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 19:19:52 compute-0 ceph-mon[75120]: pgmap v633: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:52 compute-0 podman[238981]: nova_compute
Jan 20 19:19:52 compute-0 podman[238979]: 2026-01-20 19:19:51.989347105 +0000 UTC m=+0.022553539 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:19:52 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 20 19:19:52 compute-0 systemd[1]: Stopped nova_compute container.
Jan 20 19:19:52 compute-0 systemd[1]: Started libpod-conmon-a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f.scope.
Jan 20 19:19:52 compute-0 systemd[1]: Starting nova_compute container...
Jan 20 19:19:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:52 compute-0 podman[238979]: 2026-01-20 19:19:52.139272713 +0000 UTC m=+0.172479207 container init a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:19:52 compute-0 friendly_antonelli[239007]: 167 167
Jan 20 19:19:52 compute-0 podman[238979]: 2026-01-20 19:19:52.153739674 +0000 UTC m=+0.186946098 container start a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:19:52 compute-0 systemd[1]: libpod-a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f.scope: Deactivated successfully.
Jan 20 19:19:52 compute-0 conmon[239007]: conmon a9de9ad3b6aa1ba11698 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f.scope/container/memory.events
Jan 20 19:19:52 compute-0 podman[238979]: 2026-01-20 19:19:52.158687825 +0000 UTC m=+0.191894319 container attach a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 19:19:52 compute-0 podman[238979]: 2026-01-20 19:19:52.159110285 +0000 UTC m=+0.192316719 container died a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:19:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-94e69433b160e0ff16a18427dc4f2cb64d7db8b5fc3524962681a32ece01a63d-merged.mount: Deactivated successfully.
Jan 20 19:19:52 compute-0 podman[238979]: 2026-01-20 19:19:52.217517962 +0000 UTC m=+0.250724416 container remove a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=friendly_antonelli, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:19:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cbb93bf99c72a79384e468b1bb2ce45b92af13f9a65626e0fa2d1b10a713f4ec/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:52 compute-0 systemd[1]: libpod-conmon-a9de9ad3b6aa1ba1169892acf265868054829e1bff1b47119e74cc9ebff7247f.scope: Deactivated successfully.
Jan 20 19:19:52 compute-0 podman[239008]: 2026-01-20 19:19:52.241823932 +0000 UTC m=+0.120083946 container init 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 19:19:52 compute-0 podman[239008]: 2026-01-20 19:19:52.24752077 +0000 UTC m=+0.125780764 container start 26c9d359a695c22bda9b446a7e43acebc3baa53fef49397ec79d4762fb5d6ca0 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:19:52 compute-0 podman[239008]: nova_compute
Jan 20 19:19:52 compute-0 nova_compute[239038]: + sudo -E kolla_set_configs
Jan 20 19:19:52 compute-0 systemd[1]: Started nova_compute container.
Jan 20 19:19:52 compute-0 sudo[238841]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Validating config file
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying service configuration files
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Deleting /etc/ceph
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Creating directory /etc/ceph
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /etc/ceph
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Writing out command to execute
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:19:52 compute-0 nova_compute[239038]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 20 19:19:52 compute-0 nova_compute[239038]: ++ cat /run_command
Jan 20 19:19:52 compute-0 nova_compute[239038]: + CMD=nova-compute
Jan 20 19:19:52 compute-0 nova_compute[239038]: + ARGS=
Jan 20 19:19:52 compute-0 nova_compute[239038]: + sudo kolla_copy_cacerts
Jan 20 19:19:52 compute-0 nova_compute[239038]: + [[ ! -n '' ]]
Jan 20 19:19:52 compute-0 nova_compute[239038]: + . kolla_extend_start
Jan 20 19:19:52 compute-0 nova_compute[239038]: + echo 'Running command: '\''nova-compute'\'''
Jan 20 19:19:52 compute-0 nova_compute[239038]: Running command: 'nova-compute'
Jan 20 19:19:52 compute-0 nova_compute[239038]: + umask 0022
Jan 20 19:19:52 compute-0 nova_compute[239038]: + exec nova-compute
Jan 20 19:19:52 compute-0 podman[239062]: 2026-01-20 19:19:52.385101779 +0000 UTC m=+0.044256394 container create 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:19:52 compute-0 systemd[1]: Started libpod-conmon-4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea.scope.
Jan 20 19:19:52 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb5545428b9815f585fb8e2f2d5e33b0873efe2702b1e9565902bd7136dd5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb5545428b9815f585fb8e2f2d5e33b0873efe2702b1e9565902bd7136dd5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb5545428b9815f585fb8e2f2d5e33b0873efe2702b1e9565902bd7136dd5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:52 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4a6fb5545428b9815f585fb8e2f2d5e33b0873efe2702b1e9565902bd7136dd5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:52 compute-0 podman[239062]: 2026-01-20 19:19:52.367505953 +0000 UTC m=+0.026660368 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:19:52 compute-0 podman[239062]: 2026-01-20 19:19:52.476019006 +0000 UTC m=+0.135173441 container init 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3)
Jan 20 19:19:52 compute-0 podman[239062]: 2026-01-20 19:19:52.486607613 +0000 UTC m=+0.145762018 container start 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle)
Jan 20 19:19:52 compute-0 podman[239062]: 2026-01-20 19:19:52.491393179 +0000 UTC m=+0.150547604 container attach 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3)
Jan 20 19:19:52 compute-0 serene_wozniak[239098]: {
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:     "0": [
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:         {
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "devices": [
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "/dev/loop3"
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             ],
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_name": "ceph_lv0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_size": "21470642176",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "name": "ceph_lv0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "tags": {
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.cluster_name": "ceph",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.crush_device_class": "",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.encrypted": "0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.objectstore": "bluestore",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.osd_id": "0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.type": "block",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.vdo": "0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.with_tpm": "0"
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             },
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "type": "block",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "vg_name": "ceph_vg0"
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:         }
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:     ],
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:     "1": [
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:         {
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "devices": [
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "/dev/loop4"
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             ],
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_name": "ceph_lv1",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_size": "21470642176",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "name": "ceph_lv1",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "tags": {
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.cluster_name": "ceph",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.crush_device_class": "",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.encrypted": "0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.objectstore": "bluestore",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.osd_id": "1",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.type": "block",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.vdo": "0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.with_tpm": "0"
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             },
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "type": "block",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "vg_name": "ceph_vg1"
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:         }
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:     ],
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:     "2": [
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:         {
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "devices": [
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "/dev/loop5"
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             ],
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_name": "ceph_lv2",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_size": "21470642176",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "name": "ceph_lv2",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "tags": {
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.cluster_name": "ceph",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.crush_device_class": "",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.encrypted": "0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.objectstore": "bluestore",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.osd_id": "2",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.type": "block",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.vdo": "0",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:                 "ceph.with_tpm": "0"
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             },
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "type": "block",
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:             "vg_name": "ceph_vg2"
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:         }
Jan 20 19:19:52 compute-0 serene_wozniak[239098]:     ]
Jan 20 19:19:52 compute-0 serene_wozniak[239098]: }
Jan 20 19:19:52 compute-0 systemd[1]: libpod-4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea.scope: Deactivated successfully.
Jan 20 19:19:52 compute-0 podman[239062]: 2026-01-20 19:19:52.812057112 +0000 UTC m=+0.471211517 container died 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:19:52 compute-0 sudo[239232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-znuowwmjbmmggfmujkxhjsqcirybiyhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1768936792.53321-1307-116828933071238/AnsiballZ_podman_container.py'
Jan 20 19:19:52 compute-0 sudo[239232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:19:52 compute-0 podman[239062]: 2026-01-20 19:19:52.860810765 +0000 UTC m=+0.519965170 container remove 4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=serene_wozniak, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:19:52 compute-0 systemd[1]: libpod-conmon-4d34462c70b931430843896eda04cb5815a107956296ab06b1bf29928adbe1ea.scope: Deactivated successfully.
Jan 20 19:19:52 compute-0 sudo[238927]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:52 compute-0 sudo[239245]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:19:52 compute-0 sudo[239245]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:52 compute-0 sudo[239245]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:53 compute-0 sudo[239270]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:19:53 compute-0 sudo[239270]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay-4a6fb5545428b9815f585fb8e2f2d5e33b0873efe2702b1e9565902bd7136dd5-merged.mount: Deactivated successfully.
Jan 20 19:19:53 compute-0 python3.9[239235]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 20 19:19:53 compute-0 systemd[1]: Started libpod-conmon-d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb.scope.
Jan 20 19:19:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee05ca57c0648f38e5192c173d39f2cee12a37bc51f4d3055824e69624120fb3/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee05ca57c0648f38e5192c173d39f2cee12a37bc51f4d3055824e69624120fb3/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee05ca57c0648f38e5192c173d39f2cee12a37bc51f4d3055824e69624120fb3/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:53 compute-0 podman[239320]: 2026-01-20 19:19:53.291294584 +0000 UTC m=+0.111331894 container init d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 19:19:53 compute-0 podman[239320]: 2026-01-20 19:19:53.299667867 +0000 UTC m=+0.119705157 container start d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, io.buildah.version=1.41.3)
Jan 20 19:19:53 compute-0 python3.9[239235]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Applying nova statedir ownership
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 20 19:19:53 compute-0 nova_compute_init[239358]: INFO:nova_statedir:Nova statedir ownership complete
Jan 20 19:19:53 compute-0 podman[239348]: 2026-01-20 19:19:53.354785904 +0000 UTC m=+0.057270731 container create 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 19:19:53 compute-0 systemd[1]: libpod-d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb.scope: Deactivated successfully.
Jan 20 19:19:53 compute-0 podman[239365]: 2026-01-20 19:19:53.372760371 +0000 UTC m=+0.031425564 container died d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, container_name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 20 19:19:53 compute-0 systemd[1]: Started libpod-conmon-59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46.scope.
Jan 20 19:19:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:53 compute-0 podman[239348]: 2026-01-20 19:19:53.419825762 +0000 UTC m=+0.122310609 container init 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:19:53 compute-0 podman[239348]: 2026-01-20 19:19:53.426338121 +0000 UTC m=+0.128822948 container start 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:19:53 compute-0 podman[239348]: 2026-01-20 19:19:53.334291557 +0000 UTC m=+0.036776404 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:19:53 compute-0 eager_bhaskara[239384]: 167 167
Jan 20 19:19:53 compute-0 systemd[1]: libpod-59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46.scope: Deactivated successfully.
Jan 20 19:19:53 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb-userdata-shm.mount: Deactivated successfully.
Jan 20 19:19:53 compute-0 podman[239348]: 2026-01-20 19:19:53.452287431 +0000 UTC m=+0.154772278 container attach 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:19:53 compute-0 podman[239348]: 2026-01-20 19:19:53.452825174 +0000 UTC m=+0.155310001 container died 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:19:53 compute-0 podman[239376]: 2026-01-20 19:19:53.466125316 +0000 UTC m=+0.086496459 container cleanup d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 20 19:19:53 compute-0 sudo[239232]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:53 compute-0 systemd[1]: libpod-conmon-d02b9989193f3691eb9be524d5bdacdfa30d0d3d387ced80d8b477c12152f1bb.scope: Deactivated successfully.
Jan 20 19:19:53 compute-0 podman[239348]: 2026-01-20 19:19:53.490719354 +0000 UTC m=+0.193204181 container remove 59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=eager_bhaskara, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 19:19:53 compute-0 systemd[1]: libpod-conmon-59330fe8add68d9c10faab453505ab8b92a75cfd6cc72eff7da0d81b61f6bb46.scope: Deactivated successfully.
Jan 20 19:19:53 compute-0 podman[239449]: 2026-01-20 19:19:53.664615374 +0000 UTC m=+0.051409908 container create 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 20 19:19:53 compute-0 podman[239449]: 2026-01-20 19:19:53.638281615 +0000 UTC m=+0.025076149 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:19:53 compute-0 systemd[1]: Started libpod-conmon-02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4.scope.
Jan 20 19:19:53 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a022a509aacf4d1c48de25e0104baa5b20c04733bd69a64368185bb3350778/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a022a509aacf4d1c48de25e0104baa5b20c04733bd69a64368185bb3350778/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a022a509aacf4d1c48de25e0104baa5b20c04733bd69a64368185bb3350778/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:53 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82a022a509aacf4d1c48de25e0104baa5b20c04733bd69a64368185bb3350778/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:19:53 compute-0 podman[239449]: 2026-01-20 19:19:53.790387237 +0000 UTC m=+0.177181791 container init 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:19:53 compute-0 podman[239449]: 2026-01-20 19:19:53.796656169 +0000 UTC m=+0.183450703 container start 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:19:53 compute-0 podman[239449]: 2026-01-20 19:19:53.801039166 +0000 UTC m=+0.187833690 container attach 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:19:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:53 compute-0 sshd-session[214361]: Connection closed by 192.168.122.30 port 52502
Jan 20 19:19:53 compute-0 sshd-session[214358]: pam_unix(sshd:session): session closed for user zuul
Jan 20 19:19:53 compute-0 systemd[1]: session-50.scope: Deactivated successfully.
Jan 20 19:19:53 compute-0 systemd[1]: session-50.scope: Consumed 1min 56.749s CPU time.
Jan 20 19:19:53 compute-0 systemd-logind[797]: Session 50 logged out. Waiting for processes to exit.
Jan 20 19:19:53 compute-0 systemd-logind[797]: Removed session 50.
Jan 20 19:19:54 compute-0 ceph-mon[75120]: pgmap v634: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-045b638c8013481ec7cd0ca172aadaf40382edda30127f7190a425edd5b1becf-merged.mount: Deactivated successfully.
Jan 20 19:19:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-ee05ca57c0648f38e5192c173d39f2cee12a37bc51f4d3055824e69624120fb3-merged.mount: Deactivated successfully.
Jan 20 19:19:54 compute-0 lvm[239547]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:19:54 compute-0 lvm[239547]: VG ceph_vg1 finished
Jan 20 19:19:54 compute-0 lvm[239546]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:19:54 compute-0 lvm[239546]: VG ceph_vg0 finished
Jan 20 19:19:54 compute-0 lvm[239549]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:19:54 compute-0 lvm[239549]: VG ceph_vg2 finished
Jan 20 19:19:54 compute-0 nova_compute[239038]: 2026-01-20 19:19:54.503 239044 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:19:54 compute-0 nova_compute[239038]: 2026-01-20 19:19:54.504 239044 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:19:54 compute-0 nova_compute[239038]: 2026-01-20 19:19:54.504 239044 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 20 19:19:54 compute-0 nova_compute[239038]: 2026-01-20 19:19:54.504 239044 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 20 19:19:54 compute-0 lvm[239551]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:19:54 compute-0 lvm[239551]: VG ceph_vg2 finished
Jan 20 19:19:54 compute-0 affectionate_kowalevski[239466]: {}
Jan 20 19:19:54 compute-0 lvm[239553]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:19:54 compute-0 lvm[239553]: VG ceph_vg2 finished
Jan 20 19:19:54 compute-0 systemd[1]: libpod-02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4.scope: Deactivated successfully.
Jan 20 19:19:54 compute-0 systemd[1]: libpod-02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4.scope: Consumed 1.310s CPU time.
Jan 20 19:19:54 compute-0 podman[239449]: 2026-01-20 19:19:54.570152962 +0000 UTC m=+0.956947516 container died 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:19:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-82a022a509aacf4d1c48de25e0104baa5b20c04733bd69a64368185bb3350778-merged.mount: Deactivated successfully.
Jan 20 19:19:54 compute-0 podman[239449]: 2026-01-20 19:19:54.622738279 +0000 UTC m=+1.009532813 container remove 02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=affectionate_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:19:54 compute-0 systemd[1]: libpod-conmon-02d7465bc204acc6b89a051db75610a0902db2d639ea5fa80344af025f1425b4.scope: Deactivated successfully.
Jan 20 19:19:54 compute-0 nova_compute[239038]: 2026-01-20 19:19:54.657 239044 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:19:54 compute-0 sudo[239270]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:19:54 compute-0 nova_compute[239038]: 2026-01-20 19:19:54.670 239044 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.014s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:19:54 compute-0 nova_compute[239038]: 2026-01-20 19:19:54.671 239044 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 20 19:19:54 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:19:54 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:19:54 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:19:54 compute-0 sudo[239569]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:19:54 compute-0 sudo[239569]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:19:54 compute-0 sudo[239569]: pam_unix(sudo:session): session closed for user root
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.089 239044 INFO nova.virt.driver [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.194 239044 INFO nova.compute.provider_config [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.219 239044 DEBUG oslo_concurrency.lockutils [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.219 239044 DEBUG oslo_concurrency.lockutils [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_concurrency.lockutils [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.220 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.221 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.222 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.223 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.224 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.225 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.226 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.226 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.226 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.226 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.226 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.227 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.227 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.227 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.227 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.227 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.228 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.229 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.230 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.231 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.232 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.233 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.234 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.235 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.236 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.237 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.238 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.239 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.240 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.241 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.242 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.243 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.243 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.243 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.243 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.243 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.244 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.244 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.244 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.244 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.244 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.245 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.246 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.247 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.248 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.249 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.250 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.251 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.252 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.252 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.252 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.252 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.252 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.253 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.254 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.255 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.255 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.255 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.255 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.255 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.256 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.257 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.258 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.258 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.258 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.258 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.258 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.259 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.260 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.261 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.261 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.261 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.261 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.262 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.263 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.264 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.265 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.266 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.266 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.266 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.266 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.266 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.267 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.268 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.269 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.269 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.269 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.269 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.269 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.270 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.271 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.271 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.271 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.271 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.271 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.272 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.272 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.272 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.272 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.272 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.273 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.274 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.274 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.274 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.274 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.274 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.275 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.276 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.276 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.276 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.276 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.276 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.277 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.278 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.278 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.278 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.278 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.278 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.279 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.280 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.280 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.280 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.280 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.280 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.281 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.282 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.283 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.284 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.285 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.286 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.287 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.288 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.289 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.290 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.291 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.292 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.293 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.294 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.295 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.296 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.297 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.298 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.299 239044 WARNING oslo_config.cfg [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 20 19:19:55 compute-0 nova_compute[239038]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 20 19:19:55 compute-0 nova_compute[239038]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 20 19:19:55 compute-0 nova_compute[239038]: and ``live_migration_inbound_addr`` respectively.
Jan 20 19:19:55 compute-0 nova_compute[239038]: ).  Its value may be silently ignored in the future.
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.299 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.299 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.299 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.299 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.300 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.301 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rbd_secret_uuid        = 90fff835-31df-513f-a409-b6642f04e6ac log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.302 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.303 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.304 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.305 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.306 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.307 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.308 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.309 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.310 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.311 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.311 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.311 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.311 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.311 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.312 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.313 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.314 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.315 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.316 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.317 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.318 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.319 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.320 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.320 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.320 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.320 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.320 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.321 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.322 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.323 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.324 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.325 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.326 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.326 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.326 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.326 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.327 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.328 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.329 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.329 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.329 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.329 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.329 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.330 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.330 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.330 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.330 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.330 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.331 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.331 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.331 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.331 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.332 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.333 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.334 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.335 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.336 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.337 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.338 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.339 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.340 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.341 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.342 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.343 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.344 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.345 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.346 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.347 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.348 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.349 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.350 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.351 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.352 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.353 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.354 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.355 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.356 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.357 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.358 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.359 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.360 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.361 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.362 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.363 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.363 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.363 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.363 239044 DEBUG oslo_service.service [None req-baaeee0b-c014-462a-88e2-e0b1b42d5c2d - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.364 239044 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.377 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.377 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.378 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.378 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.391 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fdd64e57250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.393 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fdd64e57250> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.394 239044 INFO nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Connection event '1' reason 'None'
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.401 239044 INFO nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Libvirt host capabilities <capabilities>
Jan 20 19:19:55 compute-0 nova_compute[239038]: 
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <host>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <uuid>6fed1acb-e03a-4246-8d49-1248ad1fe57b</uuid>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <cpu>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <arch>x86_64</arch>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model>EPYC-Rome-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <vendor>AMD</vendor>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <microcode version='16777317'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <signature family='23' model='49' stepping='0'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='x2apic'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='tsc-deadline'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='osxsave'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='hypervisor'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='tsc_adjust'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='spec-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='stibp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='arch-capabilities'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='cmp_legacy'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='topoext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='virt-ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='lbrv'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='tsc-scale'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='vmcb-clean'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='pause-filter'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='pfthreshold'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='svme-addr-chk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='rdctl-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='skip-l1dfl-vmentry'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='mds-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature name='pschange-mc-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <pages unit='KiB' size='4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <pages unit='KiB' size='2048'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <pages unit='KiB' size='1048576'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </cpu>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <power_management>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <suspend_mem/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </power_management>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <iommu support='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <migration_features>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <live/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <uri_transports>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <uri_transport>tcp</uri_transport>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <uri_transport>rdma</uri_transport>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </uri_transports>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </migration_features>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <topology>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <cells num='1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <cell id='0'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:           <memory unit='KiB'>7864312</memory>
Jan 20 19:19:55 compute-0 nova_compute[239038]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 20 19:19:55 compute-0 nova_compute[239038]:           <pages unit='KiB' size='2048'>0</pages>
Jan 20 19:19:55 compute-0 nova_compute[239038]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 20 19:19:55 compute-0 nova_compute[239038]:           <distances>
Jan 20 19:19:55 compute-0 nova_compute[239038]:             <sibling id='0' value='10'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:           </distances>
Jan 20 19:19:55 compute-0 nova_compute[239038]:           <cpus num='8'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:           </cpus>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         </cell>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </cells>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </topology>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <cache>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </cache>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <secmodel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model>selinux</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <doi>0</doi>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </secmodel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <secmodel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model>dac</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <doi>0</doi>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </secmodel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </host>
Jan 20 19:19:55 compute-0 nova_compute[239038]: 
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <guest>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <os_type>hvm</os_type>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <arch name='i686'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <wordsize>32</wordsize>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <domain type='qemu'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <domain type='kvm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </arch>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <features>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <pae/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <nonpae/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <acpi default='on' toggle='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <apic default='on' toggle='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <cpuselection/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <deviceboot/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <disksnapshot default='on' toggle='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <externalSnapshot/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </features>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </guest>
Jan 20 19:19:55 compute-0 nova_compute[239038]: 
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <guest>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <os_type>hvm</os_type>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <arch name='x86_64'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <wordsize>64</wordsize>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <domain type='qemu'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <domain type='kvm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </arch>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <features>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <acpi default='on' toggle='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <apic default='on' toggle='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <cpuselection/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <deviceboot/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <disksnapshot default='on' toggle='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <externalSnapshot/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </features>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </guest>
Jan 20 19:19:55 compute-0 nova_compute[239038]: 
Jan 20 19:19:55 compute-0 nova_compute[239038]: </capabilities>
Jan 20 19:19:55 compute-0 nova_compute[239038]: 
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.408 239044 WARNING nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.409 239044 DEBUG nova.virt.libvirt.volume.mount [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.414 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.431 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 20 19:19:55 compute-0 nova_compute[239038]: <domainCapabilities>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <domain>kvm</domain>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <arch>i686</arch>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <vcpu max='4096'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <iothreads supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <os supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <enum name='firmware'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <loader supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>rom</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pflash</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='readonly'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>yes</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>no</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='secure'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>no</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </loader>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </os>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <cpu>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>on</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>off</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='maximum' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='maximumMigratable'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>on</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>off</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='host-model' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <vendor>AMD</vendor>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='x2apic'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='stibp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='succor'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='lbrv'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='custom' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='ClearwaterForest'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ddpd-u'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sha512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm3'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ddpd-u'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sha512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm3'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Dhyana-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Turin'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibpb-brtype'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbpb'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibpb-brtype'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbpb'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-128'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-256'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-128'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-256'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='KnightsMill'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4fmaps'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4vnniw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512er'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512pf'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='KnightsMill-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4fmaps'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4vnniw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512er'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512pf'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tbm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tbm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='athlon'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='athlon-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='core2duo'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='core2duo-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='coreduo'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='coreduo-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='n270'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='n270-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='phenom'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='phenom-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </cpu>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <memoryBacking supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <enum name='sourceType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>file</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>anonymous</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>memfd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </memoryBacking>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <devices>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <disk supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='diskDevice'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>disk</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>cdrom</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>floppy</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>lun</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='bus'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>fdc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>scsi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>sata</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-non-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </disk>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <graphics supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vnc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>egl-headless</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dbus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </graphics>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <video supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='modelType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vga</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>cirrus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>none</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>bochs</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>ramfb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </video>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <hostdev supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='mode'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>subsystem</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='startupPolicy'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>default</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>mandatory</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>requisite</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>optional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='subsysType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pci</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>scsi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='capsType'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='pciBackend'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </hostdev>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <rng supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-non-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>random</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>egd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>builtin</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </rng>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <filesystem supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='driverType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>path</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>handle</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtiofs</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </filesystem>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <tpm supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tpm-tis</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tpm-crb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>emulator</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>external</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendVersion'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>2.0</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </tpm>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <redirdev supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='bus'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </redirdev>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <channel supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pty</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>unix</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </channel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <crypto supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>qemu</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>builtin</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </crypto>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <interface supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>default</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>passt</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </interface>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <panic supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>isa</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>hyperv</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </panic>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <console supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>null</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pty</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dev</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>file</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pipe</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>stdio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>udp</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tcp</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>unix</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>qemu-vdagent</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dbus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </console>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </devices>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <features>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <gic supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <vmcoreinfo supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <genid supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <backingStoreInput supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <backup supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <async-teardown supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <s390-pv supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <ps2 supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <tdx supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <sev supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <sgx supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <hyperv supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='features'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>relaxed</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vapic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>spinlocks</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vpindex</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>runtime</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>synic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>stimer</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>reset</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vendor_id</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>frequencies</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>reenlightenment</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tlbflush</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>ipi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>avic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>emsr_bitmap</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>xmm_input</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <defaults>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <spinlocks>4095</spinlocks>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <stimer_direct>on</stimer_direct>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </defaults>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </hyperv>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <launchSecurity supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </features>
Jan 20 19:19:55 compute-0 nova_compute[239038]: </domainCapabilities>
Jan 20 19:19:55 compute-0 nova_compute[239038]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.450 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 20 19:19:55 compute-0 nova_compute[239038]: <domainCapabilities>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <domain>kvm</domain>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <arch>i686</arch>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <vcpu max='240'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <iothreads supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <os supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <enum name='firmware'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <loader supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>rom</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pflash</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='readonly'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>yes</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>no</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='secure'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>no</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </loader>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </os>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <cpu>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>on</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>off</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='maximum' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='maximumMigratable'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>on</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>off</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='host-model' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <vendor>AMD</vendor>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='x2apic'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='stibp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='succor'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='lbrv'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='custom' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='ClearwaterForest'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ddpd-u'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sha512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm3'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ddpd-u'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sha512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm3'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Dhyana-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Turin'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibpb-brtype'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbpb'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibpb-brtype'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbpb'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-128'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-256'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-128'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-256'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='KnightsMill'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4fmaps'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4vnniw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512er'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512pf'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='KnightsMill-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4fmaps'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4vnniw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512er'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512pf'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tbm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tbm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='athlon'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='athlon-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='core2duo'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='core2duo-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='coreduo'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='coreduo-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='n270'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='n270-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='phenom'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='phenom-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </cpu>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <memoryBacking supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <enum name='sourceType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>file</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>anonymous</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>memfd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </memoryBacking>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <devices>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <disk supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='diskDevice'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>disk</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>cdrom</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>floppy</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>lun</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='bus'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>ide</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>fdc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>scsi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>sata</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-non-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </disk>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <graphics supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vnc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>egl-headless</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dbus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </graphics>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <video supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='modelType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vga</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>cirrus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>none</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>bochs</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>ramfb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </video>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <hostdev supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='mode'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>subsystem</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='startupPolicy'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>default</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>mandatory</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>requisite</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>optional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='subsysType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pci</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>scsi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='capsType'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='pciBackend'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </hostdev>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <rng supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-non-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>random</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>egd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>builtin</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </rng>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <filesystem supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='driverType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>path</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>handle</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtiofs</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </filesystem>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <tpm supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tpm-tis</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tpm-crb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>emulator</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>external</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendVersion'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>2.0</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </tpm>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <redirdev supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='bus'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </redirdev>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <channel supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pty</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>unix</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </channel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <crypto supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>qemu</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>builtin</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </crypto>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <interface supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>default</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>passt</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </interface>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <panic supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>isa</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>hyperv</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </panic>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <console supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>null</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pty</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dev</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>file</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pipe</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>stdio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>udp</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tcp</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>unix</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>qemu-vdagent</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dbus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </console>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </devices>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <features>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <gic supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <vmcoreinfo supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <genid supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <backingStoreInput supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <backup supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <async-teardown supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <s390-pv supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <ps2 supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <tdx supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <sev supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <sgx supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <hyperv supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='features'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>relaxed</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vapic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>spinlocks</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vpindex</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>runtime</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>synic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>stimer</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>reset</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vendor_id</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>frequencies</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>reenlightenment</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tlbflush</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>ipi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>avic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>emsr_bitmap</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>xmm_input</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <defaults>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <spinlocks>4095</spinlocks>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <stimer_direct>on</stimer_direct>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </defaults>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </hyperv>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <launchSecurity supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </features>
Jan 20 19:19:55 compute-0 nova_compute[239038]: </domainCapabilities>
Jan 20 19:19:55 compute-0 nova_compute[239038]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.498 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.503 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 20 19:19:55 compute-0 nova_compute[239038]: <domainCapabilities>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <domain>kvm</domain>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <arch>x86_64</arch>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <vcpu max='240'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <iothreads supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <os supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <enum name='firmware'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <loader supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>rom</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pflash</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='readonly'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>yes</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>no</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='secure'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>no</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </loader>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </os>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <cpu>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>on</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>off</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='maximum' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='maximumMigratable'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>on</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>off</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='host-model' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <vendor>AMD</vendor>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='x2apic'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='stibp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='succor'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='lbrv'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='custom' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='ClearwaterForest'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ddpd-u'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sha512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm3'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ddpd-u'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sha512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm3'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Dhyana-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Turin'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibpb-brtype'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbpb'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibpb-brtype'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbpb'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-128'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-256'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-128'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-256'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='KnightsMill'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4fmaps'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4vnniw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512er'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512pf'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='KnightsMill-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4fmaps'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4vnniw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512er'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512pf'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tbm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tbm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='athlon'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='athlon-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='core2duo'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='core2duo-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='coreduo'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='coreduo-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='n270'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='n270-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='phenom'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='phenom-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </cpu>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <memoryBacking supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <enum name='sourceType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>file</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>anonymous</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>memfd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </memoryBacking>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <devices>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <disk supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='diskDevice'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>disk</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>cdrom</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>floppy</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>lun</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='bus'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>ide</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>fdc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>scsi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>sata</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-non-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </disk>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <graphics supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vnc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>egl-headless</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dbus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </graphics>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <video supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='modelType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vga</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>cirrus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>none</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>bochs</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>ramfb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </video>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <hostdev supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='mode'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>subsystem</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='startupPolicy'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>default</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>mandatory</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>requisite</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>optional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='subsysType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pci</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>scsi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='capsType'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='pciBackend'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </hostdev>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <rng supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-non-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>random</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>egd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>builtin</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </rng>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <filesystem supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='driverType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>path</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>handle</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtiofs</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </filesystem>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <tpm supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tpm-tis</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tpm-crb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>emulator</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>external</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendVersion'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>2.0</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </tpm>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <redirdev supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='bus'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </redirdev>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <channel supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pty</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>unix</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </channel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <crypto supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>qemu</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>builtin</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </crypto>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <interface supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>default</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>passt</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </interface>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <panic supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>isa</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>hyperv</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </panic>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <console supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>null</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pty</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dev</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>file</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pipe</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>stdio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>udp</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tcp</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>unix</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>qemu-vdagent</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dbus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </console>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </devices>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <features>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <gic supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <vmcoreinfo supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <genid supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <backingStoreInput supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <backup supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <async-teardown supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <s390-pv supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <ps2 supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <tdx supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <sev supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <sgx supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <hyperv supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='features'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>relaxed</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vapic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>spinlocks</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vpindex</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>runtime</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>synic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>stimer</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>reset</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vendor_id</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>frequencies</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>reenlightenment</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tlbflush</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>ipi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>avic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>emsr_bitmap</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>xmm_input</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <defaults>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <spinlocks>4095</spinlocks>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <stimer_direct>on</stimer_direct>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </defaults>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </hyperv>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <launchSecurity supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </features>
Jan 20 19:19:55 compute-0 nova_compute[239038]: </domainCapabilities>
Jan 20 19:19:55 compute-0 nova_compute[239038]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.582 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 20 19:19:55 compute-0 nova_compute[239038]: <domainCapabilities>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <path>/usr/libexec/qemu-kvm</path>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <domain>kvm</domain>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <arch>x86_64</arch>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <vcpu max='4096'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <iothreads supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <os supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <enum name='firmware'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>efi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <loader supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>rom</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pflash</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='readonly'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>yes</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>no</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='secure'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>yes</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>no</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </loader>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </os>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <cpu>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='host-passthrough' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='hostPassthroughMigratable'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>on</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>off</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='maximum' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='maximumMigratable'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>on</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>off</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='host-model' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <vendor>AMD</vendor>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='x2apic'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc-deadline'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='hypervisor'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc_adjust'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='spec-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='stibp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='cmp_legacy'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='overflow-recov'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='succor'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='amd-ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='virt-ssbd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='lbrv'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='tsc-scale'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='vmcb-clean'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='flushbyasid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='pause-filter'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='pfthreshold'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='svme-addr-chk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <feature policy='disable' name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <mode name='custom' supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Broadwell-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cascadelake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='ClearwaterForest'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ddpd-u'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sha512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm3'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='ClearwaterForest-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ddpd-u'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sha512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm3'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sm4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Cooperlake-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Denverton-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Dhyana-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Genoa-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Milan-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Rome-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Turin'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibpb-brtype'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbpb'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-Turin-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amd-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='auto-ibrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vp2intersect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fs-gs-base-ns'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibpb-brtype'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='no-nested-data-bp'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='null-sel-clr-base'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='perfmon-v2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbpb'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='srso-user-kernel-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='stibp-always-on'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>EPYC-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='EPYC-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-128'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-256'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='GraniteRapids-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-128'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-256'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx10-512'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='prefetchiti'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Haswell-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-noTSX'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v6'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Icelake-Server-v7'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='IvyBridge-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='KnightsMill'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4fmaps'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4vnniw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512er'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512pf'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='KnightsMill-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4fmaps'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-4vnniw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512er'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512pf'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G4-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tbm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Opteron_G5-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fma4'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tbm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xop'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SapphireRapids-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='amx-tile'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-bf16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-fp16'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512-vpopcntdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bitalg'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vbmi2'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrc'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fzrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='la57'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='taa-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='tsx-ldtrk'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>SierraForest-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='SierraForest-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ifma'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-ne-convert'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx-vnni-int8'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bhi-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='bus-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cmpccxadd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fbsdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='fsrs'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ibrs-all'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='intel-psfd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ipred-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='lam'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mcdt-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pbrsb-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='psdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rrsba-ctrl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='sbdr-ssdp-no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='serialize'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vaes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='vpclmulqdq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Client-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='hle'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='rtm'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Skylake-Server-v5'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512bw'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512cd'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512dq'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512f'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='avx512vl'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='invpcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pcid'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='pku'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='mpx'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v2'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v3'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='core-capability'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='split-lock-detect'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='Snowridge-v4'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='cldemote'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='erms'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='gfni'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdir64b'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='movdiri'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='xsaves'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='athlon'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='athlon-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='core2duo'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='core2duo-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='coreduo'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='coreduo-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='n270'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='n270-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='ss'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='phenom'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <blockers model='phenom-v1'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnow'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <feature name='3dnowext'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </blockers>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </mode>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </cpu>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <memoryBacking supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <enum name='sourceType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>file</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>anonymous</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <value>memfd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </memoryBacking>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <devices>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <disk supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='diskDevice'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>disk</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>cdrom</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>floppy</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>lun</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='bus'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>fdc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>scsi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>sata</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-non-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </disk>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <graphics supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vnc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>egl-headless</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dbus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </graphics>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <video supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='modelType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vga</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>cirrus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>none</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>bochs</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>ramfb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </video>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <hostdev supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='mode'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>subsystem</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='startupPolicy'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>default</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>mandatory</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>requisite</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>optional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='subsysType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pci</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>scsi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='capsType'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='pciBackend'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </hostdev>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <rng supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtio-non-transitional</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>random</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>egd</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>builtin</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </rng>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <filesystem supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='driverType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>path</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>handle</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>virtiofs</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </filesystem>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <tpm supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tpm-tis</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tpm-crb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>emulator</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>external</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendVersion'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>2.0</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </tpm>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <redirdev supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='bus'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>usb</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </redirdev>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <channel supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pty</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>unix</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </channel>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <crypto supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>qemu</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendModel'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>builtin</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </crypto>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <interface supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='backendType'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>default</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>passt</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </interface>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <panic supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='model'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>isa</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>hyperv</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </panic>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <console supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='type'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>null</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vc</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pty</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dev</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>file</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>pipe</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>stdio</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>udp</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tcp</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>unix</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>qemu-vdagent</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>dbus</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </console>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </devices>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   <features>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <gic supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <vmcoreinfo supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <genid supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <backingStoreInput supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <backup supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <async-teardown supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <s390-pv supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <ps2 supported='yes'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <tdx supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <sev supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <sgx supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <hyperv supported='yes'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <enum name='features'>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>relaxed</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vapic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>spinlocks</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vpindex</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>runtime</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>synic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>stimer</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>reset</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>vendor_id</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>frequencies</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>reenlightenment</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>tlbflush</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>ipi</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>avic</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>emsr_bitmap</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <value>xmm_input</value>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </enum>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       <defaults>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <spinlocks>4095</spinlocks>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <stimer_direct>on</stimer_direct>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <tlbflush_direct>on</tlbflush_direct>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <tlbflush_extended>on</tlbflush_extended>
Jan 20 19:19:55 compute-0 nova_compute[239038]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 20 19:19:55 compute-0 nova_compute[239038]:       </defaults>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     </hyperv>
Jan 20 19:19:55 compute-0 nova_compute[239038]:     <launchSecurity supported='no'/>
Jan 20 19:19:55 compute-0 nova_compute[239038]:   </features>
Jan 20 19:19:55 compute-0 nova_compute[239038]: </domainCapabilities>
Jan 20 19:19:55 compute-0 nova_compute[239038]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.660 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.666 239044 INFO nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Secure Boot support detected
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.668 239044 INFO nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.675 239044 DEBUG nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.710 239044 INFO nova.virt.node [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Determined node identity 178956bf-6050-42b7-876f-3f96271cf4ff from /var/lib/nova/compute_id
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.729 239044 WARNING nova.compute.manager [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Compute nodes ['178956bf-6050-42b7-876f-3f96271cf4ff'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.762 239044 INFO nova.compute.manager [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.792 239044 WARNING nova.compute.manager [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.792 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.793 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.793 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.793 239044 DEBUG nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:19:55 compute-0 nova_compute[239038]: 2026-01-20 19:19:55.793 239044 DEBUG oslo_concurrency.processutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:19:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:56 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:19:56 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3114600400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:19:56 compute-0 nova_compute[239038]: 2026-01-20 19:19:56.336 239044 DEBUG oslo_concurrency.processutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:19:56 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 20 19:19:56 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 20 19:19:56 compute-0 nova_compute[239038]: 2026-01-20 19:19:56.647 239044 WARNING nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:19:56 compute-0 nova_compute[239038]: 2026-01-20 19:19:56.648 239044 DEBUG nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5157MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:19:56 compute-0 nova_compute[239038]: 2026-01-20 19:19:56.649 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:19:56 compute-0 nova_compute[239038]: 2026-01-20 19:19:56.649 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:19:56 compute-0 nova_compute[239038]: 2026-01-20 19:19:56.664 239044 WARNING nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] No compute node record for compute-0.ctlplane.example.com:178956bf-6050-42b7-876f-3f96271cf4ff: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 178956bf-6050-42b7-876f-3f96271cf4ff could not be found.
Jan 20 19:19:56 compute-0 ceph-mon[75120]: pgmap v635: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:56 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3114600400' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:19:56 compute-0 nova_compute[239038]: 2026-01-20 19:19:56.680 239044 INFO nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 178956bf-6050-42b7-876f-3f96271cf4ff
Jan 20 19:19:56 compute-0 nova_compute[239038]: 2026-01-20 19:19:56.735 239044 DEBUG nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:19:56 compute-0 nova_compute[239038]: 2026-01-20 19:19:56.735 239044 DEBUG nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:19:57 compute-0 nova_compute[239038]: 2026-01-20 19:19:57.574 239044 INFO nova.scheduler.client.report [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] [req-0aabd44f-9296-4a48-bff3-e34edea8db97] Created resource provider record via placement API for resource provider with UUID 178956bf-6050-42b7-876f-3f96271cf4ff and name compute-0.ctlplane.example.com.
Jan 20 19:19:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:57 compute-0 nova_compute[239038]: 2026-01-20 19:19:57.985 239044 DEBUG oslo_concurrency.processutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:19:58 compute-0 ceph-mon[75120]: pgmap v636: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:19:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:19:58 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3012488376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.516 239044 DEBUG oslo_concurrency.processutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.521 239044 DEBUG nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 20 19:19:58 compute-0 nova_compute[239038]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.521 239044 INFO nova.virt.libvirt.host [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] kernel doesn't support AMD SEV
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.522 239044 DEBUG nova.compute.provider_tree [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Updating inventory in ProviderTree for provider 178956bf-6050-42b7-876f-3f96271cf4ff with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.523 239044 DEBUG nova.virt.libvirt.driver [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.657 239044 DEBUG nova.scheduler.client.report [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Updated inventory for provider 178956bf-6050-42b7-876f-3f96271cf4ff with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.657 239044 DEBUG nova.compute.provider_tree [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Updating resource provider 178956bf-6050-42b7-876f-3f96271cf4ff generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.657 239044 DEBUG nova.compute.provider_tree [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Updating inventory in ProviderTree for provider 178956bf-6050-42b7-876f-3f96271cf4ff with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.741 239044 DEBUG nova.compute.provider_tree [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Updating resource provider 178956bf-6050-42b7-876f-3f96271cf4ff generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.765 239044 DEBUG nova.compute.resource_tracker [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.765 239044 DEBUG oslo_concurrency.lockutils [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.765 239044 DEBUG nova.service [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.871 239044 DEBUG nova.service [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 20 19:19:58 compute-0 nova_compute[239038]: 2026-01-20 19:19:58.872 239044 DEBUG nova.servicegroup.drivers.db [None req-f3915a92-1272-44ab-b713-c9ef75ecba55 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 20 19:19:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:19:59 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3012488376' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:19:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:00 compute-0 ceph-mon[75120]: pgmap v637: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:02 compute-0 ceph-mon[75120]: pgmap v638: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:04 compute-0 ceph-mon[75120]: pgmap v639: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:20:05.445 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:20:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:20:05.446 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:20:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:20:05.446 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:20:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:06 compute-0 ceph-mon[75120]: pgmap v640: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:08 compute-0 ceph-mon[75120]: pgmap v641: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:08 compute-0 podman[239685]: 2026-01-20 19:20:08.08217267 +0000 UTC m=+0.086470389 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:20:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:09 compute-0 podman[239712]: 2026-01-20 19:20:09.377790295 +0000 UTC m=+0.058532641 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 20 19:20:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:10 compute-0 ceph-mon[75120]: pgmap v642: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:12 compute-0 ceph-mon[75120]: pgmap v643: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:14 compute-0 ceph-mon[75120]: pgmap v644: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:16 compute-0 ceph-mon[75120]: pgmap v645: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:20:17 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394021803' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:20:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:20:17 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2394021803' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:20:18 compute-0 ceph-mon[75120]: pgmap v646: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:18 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/2394021803' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:20:18 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/2394021803' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:20:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:20:18 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3977705230' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:20:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:20:18 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3977705230' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:20:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:20:18 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2030412672' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:20:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:20:18 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2030412672' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:20:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:19 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/3977705230' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:20:19 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/3977705230' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:20:19 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/2030412672' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:20:19 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/2030412672' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:20:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:20 compute-0 ceph-mon[75120]: pgmap v647: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:23 compute-0 ceph-mon[75120]: pgmap v648: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:25 compute-0 ceph-mon[75120]: pgmap v649: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:25 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:27 compute-0 ceph-mon[75120]: pgmap v650: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:27 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:29 compute-0 ceph-mon[75120]: pgmap v651: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:29 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:31 compute-0 ceph-mon[75120]: pgmap v652: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:20:31
Jan 20 19:20:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:20:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:20:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['.rgw.root', 'backups', 'cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'images', 'default.rgw.control', 'cephfs.cephfs.data', 'volumes']
Jan 20 19:20:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:20:31 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:33 compute-0 ceph-mon[75120]: pgmap v653: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:33 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:20:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:20:35 compute-0 ceph-mon[75120]: pgmap v654: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:35 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:37 compute-0 ceph-mon[75120]: pgmap v655: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:37 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:38 compute-0 podman[239732]: 2026-01-20 19:20:38.406181773 +0000 UTC m=+0.077757587 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 20 19:20:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:39 compute-0 ceph-mon[75120]: pgmap v656: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:39 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:40 compute-0 podman[239758]: 2026-01-20 19:20:40.386348705 +0000 UTC m=+0.056238677 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:20:41 compute-0 ceph-mon[75120]: pgmap v657: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:41 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:43 compute-0 ceph-mon[75120]: pgmap v658: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:43 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:20:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:20:45 compute-0 ceph-mon[75120]: pgmap v659: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:45 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:46 compute-0 nova_compute[239038]: 2026-01-20 19:20:46.874 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:47 compute-0 nova_compute[239038]: 2026-01-20 19:20:47.014 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:47 compute-0 ceph-mon[75120]: pgmap v660: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:47 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:49 compute-0 ceph-mon[75120]: pgmap v661: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 20 19:20:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3981231077' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 20 19:20:49 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:20:49 compute-0 ceph-mgr[75417]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:20:49 compute-0 ceph-mgr[75417]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:20:49 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:50 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/3981231077' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 20 19:20:50 compute-0 ceph-mon[75120]: from='client.14338 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:20:51 compute-0 ceph-mon[75120]: pgmap v662: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:51 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:53 compute-0 ceph-mon[75120]: pgmap v663: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:53 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.684 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.685 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.685 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.685 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.698 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.698 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.699 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.699 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.699 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.699 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.699 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.700 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.700 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.720 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.720 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.721 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.721 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:20:54 compute-0 nova_compute[239038]: 2026-01-20 19:20:54.722 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:20:54 compute-0 sudo[239778]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:20:54 compute-0 sudo[239778]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:54 compute-0 sudo[239778]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:54 compute-0 sudo[239803]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:20:54 compute-0 sudo[239803]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:20:55 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4031378403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:20:55 compute-0 nova_compute[239038]: 2026-01-20 19:20:55.286 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:20:55 compute-0 sudo[239803]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:20:55 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:20:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:20:55 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:20:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:20:55 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:20:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:20:55 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:20:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:20:55 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:20:55 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:20:55 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:20:55 compute-0 nova_compute[239038]: 2026-01-20 19:20:55.442 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:20:55 compute-0 nova_compute[239038]: 2026-01-20 19:20:55.443 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5166MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:20:55 compute-0 nova_compute[239038]: 2026-01-20 19:20:55.443 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:20:55 compute-0 nova_compute[239038]: 2026-01-20 19:20:55.443 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:20:55 compute-0 sudo[239879]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:20:55 compute-0 sudo[239879]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:55 compute-0 sudo[239879]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:55 compute-0 ceph-mon[75120]: pgmap v664: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:55 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/4031378403' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:20:55 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:20:55 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:20:55 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:20:55 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:20:55 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:20:55 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:20:55 compute-0 sudo[239904]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:20:55 compute-0 sudo[239904]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:55 compute-0 nova_compute[239038]: 2026-01-20 19:20:55.537 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:20:55 compute-0 nova_compute[239038]: 2026-01-20 19:20:55.537 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:20:55 compute-0 nova_compute[239038]: 2026-01-20 19:20:55.557 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:20:55 compute-0 podman[239961]: 2026-01-20 19:20:55.811764473 +0000 UTC m=+0.041552560 container create 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:20:55 compute-0 systemd[1]: Started libpod-conmon-2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471.scope.
Jan 20 19:20:55 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:55 compute-0 podman[239961]: 2026-01-20 19:20:55.792519521 +0000 UTC m=+0.022307627 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:20:55 compute-0 podman[239961]: 2026-01-20 19:20:55.904227055 +0000 UTC m=+0.134015161 container init 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:20:55 compute-0 podman[239961]: 2026-01-20 19:20:55.911384747 +0000 UTC m=+0.141172833 container start 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:20:55 compute-0 podman[239961]: 2026-01-20 19:20:55.915468986 +0000 UTC m=+0.145257092 container attach 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:20:55 compute-0 elastic_einstein[239977]: 167 167
Jan 20 19:20:55 compute-0 systemd[1]: libpod-2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471.scope: Deactivated successfully.
Jan 20 19:20:55 compute-0 podman[239961]: 2026-01-20 19:20:55.918497918 +0000 UTC m=+0.148285994 container died 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 19:20:55 compute-0 systemd[1]: var-lib-containers-storage-overlay-a9bd36c412d7c13510533ba24ce4392f2a174be136c4bd48d12ce171f7e17910-merged.mount: Deactivated successfully.
Jan 20 19:20:55 compute-0 podman[239961]: 2026-01-20 19:20:55.957833283 +0000 UTC m=+0.187621369 container remove 2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_einstein, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:20:55 compute-0 systemd[1]: libpod-conmon-2778fc26213fc390d174915a6abc65bdd9d663086de53d76768156626c29f471.scope: Deactivated successfully.
Jan 20 19:20:55 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:56 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:20:56 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3836286012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:20:56 compute-0 podman[240003]: 2026-01-20 19:20:56.118951655 +0000 UTC m=+0.049162612 container create f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:20:56 compute-0 nova_compute[239038]: 2026-01-20 19:20:56.128 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:20:56 compute-0 nova_compute[239038]: 2026-01-20 19:20:56.136 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:20:56 compute-0 nova_compute[239038]: 2026-01-20 19:20:56.156 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:20:56 compute-0 systemd[1]: Started libpod-conmon-f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a.scope.
Jan 20 19:20:56 compute-0 nova_compute[239038]: 2026-01-20 19:20:56.157 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:20:56 compute-0 nova_compute[239038]: 2026-01-20 19:20:56.157 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:20:56 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:56 compute-0 podman[240003]: 2026-01-20 19:20:56.098702229 +0000 UTC m=+0.028913216 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:20:56 compute-0 podman[240003]: 2026-01-20 19:20:56.194822008 +0000 UTC m=+0.125032975 container init f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:20:56 compute-0 podman[240003]: 2026-01-20 19:20:56.201848577 +0000 UTC m=+0.132059534 container start f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:20:56 compute-0 podman[240003]: 2026-01-20 19:20:56.205197258 +0000 UTC m=+0.135408215 container attach f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 19:20:56 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3836286012' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:20:56 compute-0 competent_haibt[240021]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:20:56 compute-0 competent_haibt[240021]: --> All data devices are unavailable
Jan 20 19:20:56 compute-0 systemd[1]: libpod-f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a.scope: Deactivated successfully.
Jan 20 19:20:56 compute-0 podman[240003]: 2026-01-20 19:20:56.678880401 +0000 UTC m=+0.609091358 container died f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:20:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-4de9899fd5399446f320af1d4c4f4e0631ab212b847c6fdbb2f1dcf8bb36f932-merged.mount: Deactivated successfully.
Jan 20 19:20:56 compute-0 podman[240003]: 2026-01-20 19:20:56.744481028 +0000 UTC m=+0.674691985 container remove f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=competent_haibt, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:20:56 compute-0 systemd[1]: libpod-conmon-f7bfc7a6b059bac4718acb611555cc59d8f09ed08dcc61a57fa62dd96dc2590a.scope: Deactivated successfully.
Jan 20 19:20:56 compute-0 sudo[239904]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:56 compute-0 sudo[240053]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:20:56 compute-0 sudo[240053]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:56 compute-0 sudo[240053]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:56 compute-0 sudo[240078]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:20:56 compute-0 sudo[240078]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:57 compute-0 podman[240116]: 2026-01-20 19:20:57.23688214 +0000 UTC m=+0.042581364 container create 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 20 19:20:57 compute-0 systemd[1]: Started libpod-conmon-06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095.scope.
Jan 20 19:20:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:57 compute-0 podman[240116]: 2026-01-20 19:20:57.217154916 +0000 UTC m=+0.022854170 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:20:57 compute-0 podman[240116]: 2026-01-20 19:20:57.341989326 +0000 UTC m=+0.147688610 container init 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:20:57 compute-0 podman[240116]: 2026-01-20 19:20:57.351208048 +0000 UTC m=+0.156907292 container start 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 19:20:57 compute-0 podman[240116]: 2026-01-20 19:20:57.354735033 +0000 UTC m=+0.160434317 container attach 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:20:57 compute-0 thirsty_sammet[240133]: 167 167
Jan 20 19:20:57 compute-0 systemd[1]: libpod-06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095.scope: Deactivated successfully.
Jan 20 19:20:57 compute-0 podman[240116]: 2026-01-20 19:20:57.357712754 +0000 UTC m=+0.163411988 container died 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Jan 20 19:20:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-23640d1a69d3708fe6ce3aa85a7495cf3b203d618ea468f3aeff9d9edaa06ee6-merged.mount: Deactivated successfully.
Jan 20 19:20:57 compute-0 podman[240116]: 2026-01-20 19:20:57.43537809 +0000 UTC m=+0.241077334 container remove 06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=thirsty_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:20:57 compute-0 systemd[1]: libpod-conmon-06c575cb5ab28d0e49fb2e73220963bf35beb776fec36809fc34d239f239d095.scope: Deactivated successfully.
Jan 20 19:20:57 compute-0 ceph-mon[75120]: pgmap v665: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:57 compute-0 podman[240159]: 2026-01-20 19:20:57.634075345 +0000 UTC m=+0.048919026 container create c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:20:57 compute-0 systemd[1]: Started libpod-conmon-c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d.scope.
Jan 20 19:20:57 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cf313c218f3c54f218c44903351b7b2e3a65e9d7efd4535c0469def59138e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cf313c218f3c54f218c44903351b7b2e3a65e9d7efd4535c0469def59138e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cf313c218f3c54f218c44903351b7b2e3a65e9d7efd4535c0469def59138e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be0cf313c218f3c54f218c44903351b7b2e3a65e9d7efd4535c0469def59138e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:57 compute-0 podman[240159]: 2026-01-20 19:20:57.613422589 +0000 UTC m=+0.028266320 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:20:57 compute-0 podman[240159]: 2026-01-20 19:20:57.71208866 +0000 UTC m=+0.126932361 container init c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:20:57 compute-0 podman[240159]: 2026-01-20 19:20:57.720176205 +0000 UTC m=+0.135019886 container start c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 19:20:57 compute-0 podman[240159]: 2026-01-20 19:20:57.724085538 +0000 UTC m=+0.138929219 container attach c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 19:20:57 compute-0 pensive_poitras[240175]: {
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:     "0": [
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:         {
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "devices": [
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "/dev/loop3"
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             ],
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_name": "ceph_lv0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_size": "21470642176",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "name": "ceph_lv0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "tags": {
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.cluster_name": "ceph",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.crush_device_class": "",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.encrypted": "0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.objectstore": "bluestore",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.osd_id": "0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.type": "block",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.vdo": "0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.with_tpm": "0"
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             },
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "type": "block",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "vg_name": "ceph_vg0"
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:         }
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:     ],
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:     "1": [
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:         {
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "devices": [
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "/dev/loop4"
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             ],
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_name": "ceph_lv1",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_size": "21470642176",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "name": "ceph_lv1",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "tags": {
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.cluster_name": "ceph",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.crush_device_class": "",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.encrypted": "0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.objectstore": "bluestore",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.osd_id": "1",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.type": "block",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.vdo": "0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.with_tpm": "0"
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             },
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "type": "block",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "vg_name": "ceph_vg1"
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:         }
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:     ],
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:     "2": [
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:         {
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "devices": [
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "/dev/loop5"
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             ],
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_name": "ceph_lv2",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_size": "21470642176",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "name": "ceph_lv2",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "tags": {
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.cluster_name": "ceph",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.crush_device_class": "",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.encrypted": "0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.objectstore": "bluestore",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.osd_id": "2",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.type": "block",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.vdo": "0",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:                 "ceph.with_tpm": "0"
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             },
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "type": "block",
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:             "vg_name": "ceph_vg2"
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:         }
Jan 20 19:20:57 compute-0 pensive_poitras[240175]:     ]
Jan 20 19:20:57 compute-0 pensive_poitras[240175]: }
Jan 20 19:20:57 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:58 compute-0 systemd[1]: libpod-c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d.scope: Deactivated successfully.
Jan 20 19:20:58 compute-0 podman[240159]: 2026-01-20 19:20:58.015874361 +0000 UTC m=+0.430718052 container died c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:20:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-be0cf313c218f3c54f218c44903351b7b2e3a65e9d7efd4535c0469def59138e-merged.mount: Deactivated successfully.
Jan 20 19:20:58 compute-0 podman[240159]: 2026-01-20 19:20:58.071409125 +0000 UTC m=+0.486252806 container remove c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=pensive_poitras, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:20:58 compute-0 systemd[1]: libpod-conmon-c5971233df131780b50b0048510d3e83e287f5f5927f32aa8854f8e7854e675d.scope: Deactivated successfully.
Jan 20 19:20:58 compute-0 sudo[240078]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:58 compute-0 sudo[240196]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:20:58 compute-0 sudo[240196]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:58 compute-0 sudo[240196]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:58 compute-0 sudo[240221]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:20:58 compute-0 sudo[240221]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:20:58 compute-0 podman[240258]: 2026-01-20 19:20:58.540021146 +0000 UTC m=+0.038665699 container create 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:20:58 compute-0 systemd[1]: Started libpod-conmon-18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875.scope.
Jan 20 19:20:58 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:58 compute-0 podman[240258]: 2026-01-20 19:20:58.521977042 +0000 UTC m=+0.020621615 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:20:58 compute-0 podman[240258]: 2026-01-20 19:20:58.623497593 +0000 UTC m=+0.122142166 container init 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:20:58 compute-0 podman[240258]: 2026-01-20 19:20:58.631200357 +0000 UTC m=+0.129844900 container start 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:20:58 compute-0 podman[240258]: 2026-01-20 19:20:58.635932531 +0000 UTC m=+0.134577194 container attach 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:20:58 compute-0 practical_gauss[240274]: 167 167
Jan 20 19:20:58 compute-0 systemd[1]: libpod-18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875.scope: Deactivated successfully.
Jan 20 19:20:58 compute-0 conmon[240274]: conmon 18eedb9b2479e447dc73 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875.scope/container/memory.events
Jan 20 19:20:58 compute-0 podman[240258]: 2026-01-20 19:20:58.640959172 +0000 UTC m=+0.139603745 container died 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Jan 20 19:20:58 compute-0 systemd[1]: var-lib-containers-storage-overlay-6a59be3d97d7b732fa7622db56ab59d54fcd8aebb74663b73ac0653a0e288415-merged.mount: Deactivated successfully.
Jan 20 19:20:58 compute-0 podman[240258]: 2026-01-20 19:20:58.752650896 +0000 UTC m=+0.251295459 container remove 18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 20 19:20:58 compute-0 systemd[1]: libpod-conmon-18eedb9b2479e447dc733fbf1b08932fa4742ad045b3a3c4cca61dc68584e875.scope: Deactivated successfully.
Jan 20 19:20:58 compute-0 podman[240297]: 2026-01-20 19:20:58.913128763 +0000 UTC m=+0.045991077 container create 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:20:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:20:58 compute-0 systemd[1]: Started libpod-conmon-3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf.scope.
Jan 20 19:20:58 compute-0 podman[240297]: 2026-01-20 19:20:58.892675611 +0000 UTC m=+0.025537925 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:20:59 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ea9e10dc33b5edfe713b26da382aba27792c8e58ad3083da4f0f255e121acc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ea9e10dc33b5edfe713b26da382aba27792c8e58ad3083da4f0f255e121acc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ea9e10dc33b5edfe713b26da382aba27792c8e58ad3083da4f0f255e121acc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:59 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ea9e10dc33b5edfe713b26da382aba27792c8e58ad3083da4f0f255e121acc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:20:59 compute-0 podman[240297]: 2026-01-20 19:20:59.056836806 +0000 UTC m=+0.189699130 container init 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:20:59 compute-0 podman[240297]: 2026-01-20 19:20:59.068075476 +0000 UTC m=+0.200937780 container start 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2)
Jan 20 19:20:59 compute-0 podman[240297]: 2026-01-20 19:20:59.072477471 +0000 UTC m=+0.205339785 container attach 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 20 19:20:59 compute-0 ceph-mon[75120]: pgmap v666: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:59 compute-0 lvm[240392]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:20:59 compute-0 lvm[240392]: VG ceph_vg1 finished
Jan 20 19:20:59 compute-0 lvm[240391]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:20:59 compute-0 lvm[240391]: VG ceph_vg0 finished
Jan 20 19:20:59 compute-0 lvm[240394]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:20:59 compute-0 lvm[240394]: VG ceph_vg2 finished
Jan 20 19:20:59 compute-0 brave_darwin[240313]: {}
Jan 20 19:20:59 compute-0 systemd[1]: libpod-3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf.scope: Deactivated successfully.
Jan 20 19:20:59 compute-0 systemd[1]: libpod-3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf.scope: Consumed 1.315s CPU time.
Jan 20 19:20:59 compute-0 podman[240297]: 2026-01-20 19:20:59.846005731 +0000 UTC m=+0.978868025 container died 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:20:59 compute-0 systemd[1]: var-lib-containers-storage-overlay-68ea9e10dc33b5edfe713b26da382aba27792c8e58ad3083da4f0f255e121acc-merged.mount: Deactivated successfully.
Jan 20 19:20:59 compute-0 podman[240297]: 2026-01-20 19:20:59.886236228 +0000 UTC m=+1.019098522 container remove 3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=brave_darwin, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:20:59 compute-0 systemd[1]: libpod-conmon-3cf766b20b4a72a809d11c1ef4e27c0f4c69abbc223622e8ab4f8130adbb63cf.scope: Deactivated successfully.
Jan 20 19:20:59 compute-0 sudo[240221]: pam_unix(sudo:session): session closed for user root
Jan 20 19:20:59 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:20:59 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:20:59 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:20:59 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:20:59 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:21:00 compute-0 sudo[240408]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:21:00 compute-0 sudo[240408]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:21:00 compute-0 sudo[240408]: pam_unix(sudo:session): session closed for user root
Jan 20 19:21:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:21:00 compute-0 ceph-mon[75120]: pgmap v667: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:00 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:21:01 compute-0 sshd-session[240433]: Invalid user ubuntu from 45.148.10.240 port 53670
Jan 20 19:21:01 compute-0 sshd-session[240433]: Connection closed by invalid user ubuntu 45.148.10.240 port 53670 [preauth]
Jan 20 19:21:01 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:03 compute-0 ceph-mon[75120]: pgmap v668: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:03 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:05 compute-0 ceph-mon[75120]: pgmap v669: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:21:05.446 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:21:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:21:05.448 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:21:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:21:05.448 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:21:05 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:07 compute-0 ceph-mon[75120]: pgmap v670: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:07 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:08 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Jan 20 19:21:08 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:08.938712) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:21:08 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Jan 20 19:21:08 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936868938792, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1218, "num_deletes": 505, "total_data_size": 1360323, "memory_usage": 1392080, "flush_reason": "Manual Compaction"}
Jan 20 19:21:08 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Jan 20 19:21:08 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936868955317, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1336199, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13645, "largest_seqno": 14862, "table_properties": {"data_size": 1330829, "index_size": 2318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14112, "raw_average_key_size": 17, "raw_value_size": 1318053, "raw_average_value_size": 1679, "num_data_blocks": 106, "num_entries": 785, "num_filter_entries": 785, "num_deletions": 505, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936784, "oldest_key_time": 1768936784, "file_creation_time": 1768936868, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:21:08 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 16883 microseconds, and 6109 cpu microseconds.
Jan 20 19:21:08 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:08.955592) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1336199 bytes OK
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:08.955632) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.000183) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.000272) EVENT_LOG_v1 {"time_micros": 1768936869000256, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.000313) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1353668, prev total WAL file size 1353668, number of live WAL files 2.
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.001682) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323531' seq:0, type:0; will stop at (end)
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1304KB)], [32(7749KB)]
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936869001743, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9271246, "oldest_snapshot_seqno": -1}
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3823 keys, 7353896 bytes, temperature: kUnknown
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936869117003, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7353896, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7326665, "index_size": 16561, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9605, "raw_key_size": 93646, "raw_average_key_size": 24, "raw_value_size": 7255794, "raw_average_value_size": 1897, "num_data_blocks": 701, "num_entries": 3823, "num_filter_entries": 3823, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768936869, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.117774) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7353896 bytes
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.119438) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 80.1 rd, 63.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.6 +0.0 blob) out(7.0 +0.0 blob), read-write-amplify(12.4) write-amplify(5.5) OK, records in: 4846, records dropped: 1023 output_compression: NoCompression
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.119489) EVENT_LOG_v1 {"time_micros": 1768936869119457, "job": 14, "event": "compaction_finished", "compaction_time_micros": 115809, "compaction_time_cpu_micros": 32001, "output_level": 6, "num_output_files": 1, "total_output_size": 7353896, "num_input_records": 4846, "num_output_records": 3823, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936869119979, "job": 14, "event": "table_file_deletion", "file_number": 34}
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768936869121847, "job": 14, "event": "table_file_deletion", "file_number": 32}
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.001468) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.122017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.122027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.122029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.122032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:09 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:21:09.122034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:21:09 compute-0 podman[240435]: 2026-01-20 19:21:09.442556408 +0000 UTC m=+0.114063462 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 20 19:21:09 compute-0 ceph-mon[75120]: pgmap v671: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:09 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:11 compute-0 podman[240461]: 2026-01-20 19:21:11.371532643 +0000 UTC m=+0.046608212 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 20 19:21:11 compute-0 ceph-mon[75120]: pgmap v672: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:11 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:13 compute-0 ceph-mon[75120]: pgmap v673: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:13 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0)
Jan 20 19:21:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1378073320' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 20 19:21:14 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:21:14 compute-0 ceph-mgr[75417]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:21:14 compute-0 ceph-mgr[75417]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Jan 20 19:21:14 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/1378073320' entity='client.openstack' cmd={"prefix": "version", "format": "json"} : dispatch
Jan 20 19:21:15 compute-0 ceph-mon[75120]: pgmap v674: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:15 compute-0 ceph-mon[75120]: from='client.14344 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Jan 20 19:21:15 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:17 compute-0 ceph-mon[75120]: pgmap v675: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:17 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:19 compute-0 ceph-mon[75120]: pgmap v676: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:19 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:21 compute-0 ceph-mon[75120]: pgmap v677: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:21 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:23 compute-0 ceph-mon[75120]: pgmap v678: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:23 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:25 compute-0 ceph-mon[75120]: pgmap v679: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:26 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:27 compute-0 ceph-mon[75120]: pgmap v680: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:28 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:29 compute-0 ceph-mon[75120]: pgmap v681: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:30 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:31 compute-0 ceph-mon[75120]: pgmap v682: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:21:31
Jan 20 19:21:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:21:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:21:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.log', 'backups', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'images']
Jan 20 19:21:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:21:32 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:33 compute-0 ceph-mon[75120]: pgmap v683: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:21:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:21:35 compute-0 ceph-mon[75120]: pgmap v684: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:36 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:37 compute-0 ceph-mon[75120]: pgmap v685: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:38 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:39 compute-0 ceph-mon[75120]: pgmap v686: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:40 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:40 compute-0 podman[240482]: 2026-01-20 19:21:40.404951698 +0000 UTC m=+0.079133803 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 20 19:21:41 compute-0 ceph-mon[75120]: pgmap v687: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:42 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:42 compute-0 podman[240508]: 2026-01-20 19:21:42.389487429 +0000 UTC m=+0.055858874 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 20 19:21:43 compute-0 ceph-mon[75120]: pgmap v688: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:21:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:21:45 compute-0 ceph-mon[75120]: pgmap v689: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:46 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:47 compute-0 ceph-mon[75120]: pgmap v690: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:48 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:49 compute-0 ceph-mon[75120]: pgmap v691: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:21:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2002419222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:21:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:21:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2002419222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:21:50 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:50 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/2002419222' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:21:50 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/2002419222' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:21:51 compute-0 ceph-mon[75120]: pgmap v692: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:52 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:53 compute-0 ceph-mon[75120]: pgmap v693: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:54 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:55 compute-0 ceph-mon[75120]: pgmap v694: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:56 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.149 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.150 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.181 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.181 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.182 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.198 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.198 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.199 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.199 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.199 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.712 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.713 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.713 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.713 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:21:56 compute-0 nova_compute[239038]: 2026-01-20 19:21:56.714 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:21:57 compute-0 ceph-mon[75120]: pgmap v695: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:57 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:21:57 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1838349769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:21:57 compute-0 nova_compute[239038]: 2026-01-20 19:21:57.260 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:21:57 compute-0 nova_compute[239038]: 2026-01-20 19:21:57.413 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:21:57 compute-0 nova_compute[239038]: 2026-01-20 19:21:57.415 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5176MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:21:57 compute-0 nova_compute[239038]: 2026-01-20 19:21:57.415 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:21:57 compute-0 nova_compute[239038]: 2026-01-20 19:21:57.415 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:21:57 compute-0 nova_compute[239038]: 2026-01-20 19:21:57.481 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:21:57 compute-0 nova_compute[239038]: 2026-01-20 19:21:57.481 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:21:57 compute-0 nova_compute[239038]: 2026-01-20 19:21:57.494 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:21:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:21:58 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/209206065' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:21:58 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:21:58 compute-0 nova_compute[239038]: 2026-01-20 19:21:58.033 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:21:58 compute-0 nova_compute[239038]: 2026-01-20 19:21:58.041 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:21:58 compute-0 nova_compute[239038]: 2026-01-20 19:21:58.060 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:21:58 compute-0 nova_compute[239038]: 2026-01-20 19:21:58.061 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:21:58 compute-0 nova_compute[239038]: 2026-01-20 19:21:58.061 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:21:58 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1838349769' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:21:58 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/209206065' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:21:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:21:59 compute-0 ceph-mon[75120]: pgmap v696: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:00 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:00 compute-0 sudo[240571]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:22:00 compute-0 sudo[240571]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:00 compute-0 sudo[240571]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:00 compute-0 sudo[240596]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:22:00 compute-0 sudo[240596]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:00 compute-0 sudo[240596]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:22:00 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:22:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:22:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:22:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:22:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:22:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:22:00 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:22:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:22:00 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:22:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:22:00 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:22:00 compute-0 sudo[240653]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:22:00 compute-0 sudo[240653]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:00 compute-0 sudo[240653]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:00 compute-0 sudo[240678]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:22:00 compute-0 sudo[240678]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:01 compute-0 podman[240715]: 2026-01-20 19:22:01.209549967 +0000 UTC m=+0.044295275 container create d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Jan 20 19:22:01 compute-0 ceph-mon[75120]: pgmap v697: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:01 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:22:01 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:22:01 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:22:01 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:22:01 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:22:01 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:22:01 compute-0 systemd[1]: Started libpod-conmon-d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7.scope.
Jan 20 19:22:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:22:01 compute-0 podman[240715]: 2026-01-20 19:22:01.188296766 +0000 UTC m=+0.023042104 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:22:01 compute-0 podman[240715]: 2026-01-20 19:22:01.295330559 +0000 UTC m=+0.130075887 container init d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:22:01 compute-0 podman[240715]: 2026-01-20 19:22:01.307351407 +0000 UTC m=+0.142096715 container start d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0)
Jan 20 19:22:01 compute-0 podman[240715]: 2026-01-20 19:22:01.310619716 +0000 UTC m=+0.145365054 container attach d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Jan 20 19:22:01 compute-0 reverent_kare[240731]: 167 167
Jan 20 19:22:01 compute-0 systemd[1]: libpod-d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7.scope: Deactivated successfully.
Jan 20 19:22:01 compute-0 conmon[240731]: conmon d6f282ef2e1ef2eb1dce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7.scope/container/memory.events
Jan 20 19:22:01 compute-0 podman[240715]: 2026-01-20 19:22:01.316069067 +0000 UTC m=+0.150814375 container died d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:22:01 compute-0 systemd[1]: var-lib-containers-storage-overlay-688906c9646a1d69c8b74cd24b9eb4c27e0838b0c9b2cd44cab78828a4bf1f70-merged.mount: Deactivated successfully.
Jan 20 19:22:01 compute-0 podman[240715]: 2026-01-20 19:22:01.355517495 +0000 UTC m=+0.190262803 container remove d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=reverent_kare, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 19:22:01 compute-0 systemd[1]: libpod-conmon-d6f282ef2e1ef2eb1dcee3971099b5f4b006507dbc9ff6b28fbe1f956e0251f7.scope: Deactivated successfully.
Jan 20 19:22:01 compute-0 podman[240755]: 2026-01-20 19:22:01.534019094 +0000 UTC m=+0.055012133 container create c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:22:01 compute-0 systemd[1]: Started libpod-conmon-c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c.scope.
Jan 20 19:22:01 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:22:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:01 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:01 compute-0 podman[240755]: 2026-01-20 19:22:01.607670434 +0000 UTC m=+0.128663473 container init c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:22:01 compute-0 podman[240755]: 2026-01-20 19:22:01.517007745 +0000 UTC m=+0.038000814 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:22:01 compute-0 podman[240755]: 2026-01-20 19:22:01.616172259 +0000 UTC m=+0.137165298 container start c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:22:01 compute-0 podman[240755]: 2026-01-20 19:22:01.619712204 +0000 UTC m=+0.140705263 container attach c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, OSD_FLAVOR=default, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 19:22:02 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:02 compute-0 clever_kalam[240772]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:22:02 compute-0 clever_kalam[240772]: --> All data devices are unavailable
Jan 20 19:22:02 compute-0 systemd[1]: libpod-c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c.scope: Deactivated successfully.
Jan 20 19:22:02 compute-0 podman[240792]: 2026-01-20 19:22:02.174531297 +0000 UTC m=+0.024461759 container died c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Jan 20 19:22:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-0b79197827d0da07df3342db1d0da9452232a11f58c795973e2402f89faea5aa-merged.mount: Deactivated successfully.
Jan 20 19:22:02 compute-0 podman[240792]: 2026-01-20 19:22:02.218508493 +0000 UTC m=+0.068438905 container remove c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_kalam, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:22:02 compute-0 systemd[1]: libpod-conmon-c1b36958887bdae79a07525eecca9c742637c438f637fda9fabb7ae516dd8c9c.scope: Deactivated successfully.
Jan 20 19:22:02 compute-0 sudo[240678]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:02 compute-0 sudo[240807]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:22:02 compute-0 sudo[240807]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:02 compute-0 sudo[240807]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:02 compute-0 sudo[240832]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:22:02 compute-0 sudo[240832]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:02 compute-0 podman[240868]: 2026-01-20 19:22:02.665444764 +0000 UTC m=+0.041375195 container create cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:22:02 compute-0 systemd[1]: Started libpod-conmon-cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5.scope.
Jan 20 19:22:02 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:22:02 compute-0 podman[240868]: 2026-01-20 19:22:02.740831385 +0000 UTC m=+0.116761836 container init cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:22:02 compute-0 podman[240868]: 2026-01-20 19:22:02.64823087 +0000 UTC m=+0.024161321 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:22:02 compute-0 podman[240868]: 2026-01-20 19:22:02.747607078 +0000 UTC m=+0.123537509 container start cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:22:02 compute-0 podman[240868]: 2026-01-20 19:22:02.750665252 +0000 UTC m=+0.126595683 container attach cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 19:22:02 compute-0 objective_merkle[240884]: 167 167
Jan 20 19:22:02 compute-0 systemd[1]: libpod-cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5.scope: Deactivated successfully.
Jan 20 19:22:02 compute-0 podman[240868]: 2026-01-20 19:22:02.753340456 +0000 UTC m=+0.129270887 container died cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default)
Jan 20 19:22:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-959f2de8dfe2788e51058357ea3f84954a4ea73ea6c7882d2efb0486767ca7ad-merged.mount: Deactivated successfully.
Jan 20 19:22:02 compute-0 podman[240868]: 2026-01-20 19:22:02.791479972 +0000 UTC m=+0.167410403 container remove cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=objective_merkle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:22:02 compute-0 systemd[1]: libpod-conmon-cfbee76e46194bee5fc3f0f6d2f679e591b4fc0cc425a345ba8739aa19ad0ec5.scope: Deactivated successfully.
Jan 20 19:22:02 compute-0 podman[240908]: 2026-01-20 19:22:02.940551775 +0000 UTC m=+0.035772291 container create 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:22:02 compute-0 systemd[1]: Started libpod-conmon-5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9.scope.
Jan 20 19:22:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60a507efbe3981d928cb319e1b76e16c20c4dac94f72011bea47264dec7a774/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60a507efbe3981d928cb319e1b76e16c20c4dac94f72011bea47264dec7a774/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60a507efbe3981d928cb319e1b76e16c20c4dac94f72011bea47264dec7a774/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b60a507efbe3981d928cb319e1b76e16c20c4dac94f72011bea47264dec7a774/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:03 compute-0 podman[240908]: 2026-01-20 19:22:03.013800415 +0000 UTC m=+0.109021011 container init 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:22:03 compute-0 podman[240908]: 2026-01-20 19:22:02.925328579 +0000 UTC m=+0.020549125 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:22:03 compute-0 podman[240908]: 2026-01-20 19:22:03.021655774 +0000 UTC m=+0.116876290 container start 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:22:03 compute-0 podman[240908]: 2026-01-20 19:22:03.025573749 +0000 UTC m=+0.120794315 container attach 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Jan 20 19:22:03 compute-0 ceph-mon[75120]: pgmap v698: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]: {
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:     "0": [
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:         {
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "devices": [
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "/dev/loop3"
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             ],
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_name": "ceph_lv0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_size": "21470642176",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "name": "ceph_lv0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "tags": {
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.cluster_name": "ceph",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.crush_device_class": "",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.encrypted": "0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.objectstore": "bluestore",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.osd_id": "0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.type": "block",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.vdo": "0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.with_tpm": "0"
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             },
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "type": "block",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "vg_name": "ceph_vg0"
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:         }
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:     ],
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:     "1": [
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:         {
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "devices": [
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "/dev/loop4"
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             ],
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_name": "ceph_lv1",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_size": "21470642176",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "name": "ceph_lv1",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "tags": {
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.cluster_name": "ceph",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.crush_device_class": "",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.encrypted": "0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.objectstore": "bluestore",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.osd_id": "1",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.type": "block",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.vdo": "0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.with_tpm": "0"
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             },
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "type": "block",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "vg_name": "ceph_vg1"
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:         }
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:     ],
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:     "2": [
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:         {
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "devices": [
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "/dev/loop5"
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             ],
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_name": "ceph_lv2",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_size": "21470642176",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "name": "ceph_lv2",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "tags": {
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.cluster_name": "ceph",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.crush_device_class": "",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.encrypted": "0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.objectstore": "bluestore",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.osd_id": "2",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.type": "block",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.vdo": "0",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:                 "ceph.with_tpm": "0"
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             },
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "type": "block",
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:             "vg_name": "ceph_vg2"
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:         }
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]:     ]
Jan 20 19:22:03 compute-0 inspiring_goldwasser[240924]: }
Jan 20 19:22:03 compute-0 systemd[1]: libpod-5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9.scope: Deactivated successfully.
Jan 20 19:22:03 compute-0 podman[240908]: 2026-01-20 19:22:03.308214811 +0000 UTC m=+0.403435337 container died 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Jan 20 19:22:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-b60a507efbe3981d928cb319e1b76e16c20c4dac94f72011bea47264dec7a774-merged.mount: Deactivated successfully.
Jan 20 19:22:03 compute-0 podman[240908]: 2026-01-20 19:22:03.349675587 +0000 UTC m=+0.444896113 container remove 5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:22:03 compute-0 systemd[1]: libpod-conmon-5e0cb2ea7bb4f5d6e993efcbacbffa5d16a26229bc28c9a9f21d212f19fbcfd9.scope: Deactivated successfully.
Jan 20 19:22:03 compute-0 sudo[240832]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:03 compute-0 sudo[240948]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:22:03 compute-0 sudo[240948]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:03 compute-0 sudo[240948]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:03 compute-0 sudo[240973]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:22:03 compute-0 sudo[240973]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:03 compute-0 podman[241009]: 2026-01-20 19:22:03.803582015 +0000 UTC m=+0.037252646 container create 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 19:22:03 compute-0 systemd[1]: Started libpod-conmon-2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb.scope.
Jan 20 19:22:03 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:22:03 compute-0 podman[241009]: 2026-01-20 19:22:03.86789181 +0000 UTC m=+0.101562461 container init 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 19:22:03 compute-0 podman[241009]: 2026-01-20 19:22:03.874475329 +0000 UTC m=+0.108145960 container start 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Jan 20 19:22:03 compute-0 podman[241009]: 2026-01-20 19:22:03.877848409 +0000 UTC m=+0.111519100 container attach 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:22:03 compute-0 inspiring_pasteur[241026]: 167 167
Jan 20 19:22:03 compute-0 systemd[1]: libpod-2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb.scope: Deactivated successfully.
Jan 20 19:22:03 compute-0 podman[241009]: 2026-01-20 19:22:03.879792286 +0000 UTC m=+0.113462907 container died 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:22:03 compute-0 podman[241009]: 2026-01-20 19:22:03.788069092 +0000 UTC m=+0.021739723 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:22:03 compute-0 systemd[1]: var-lib-containers-storage-overlay-6bb54d631088a87b387c2598d51d53dacece8d8ea03c060b4fbd9c5e884bf4c6-merged.mount: Deactivated successfully.
Jan 20 19:22:03 compute-0 podman[241009]: 2026-01-20 19:22:03.914592513 +0000 UTC m=+0.148263144 container remove 2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=inspiring_pasteur, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:22:03 compute-0 systemd[1]: libpod-conmon-2b762c2c0edf78d4636f8d1d7e9d8a8eed04684fa87c110fe5f0fafe7f1b1cfb.scope: Deactivated successfully.
Jan 20 19:22:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:04 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:04 compute-0 podman[241048]: 2026-01-20 19:22:04.076534114 +0000 UTC m=+0.027912522 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:22:04 compute-0 podman[241048]: 2026-01-20 19:22:04.193101966 +0000 UTC m=+0.144480354 container create 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True)
Jan 20 19:22:04 compute-0 systemd[1]: Started libpod-conmon-1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3.scope.
Jan 20 19:22:04 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0760e29fe017c935b34348ce07a462e14f09a3dd651ef011c1853bb6803e5484/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0760e29fe017c935b34348ce07a462e14f09a3dd651ef011c1853bb6803e5484/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0760e29fe017c935b34348ce07a462e14f09a3dd651ef011c1853bb6803e5484/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:04 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0760e29fe017c935b34348ce07a462e14f09a3dd651ef011c1853bb6803e5484/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:22:04 compute-0 podman[241048]: 2026-01-20 19:22:04.287994965 +0000 UTC m=+0.239373383 container init 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:22:04 compute-0 podman[241048]: 2026-01-20 19:22:04.295875065 +0000 UTC m=+0.247253473 container start 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:22:04 compute-0 podman[241048]: 2026-01-20 19:22:04.299377029 +0000 UTC m=+0.250755437 container attach 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:22:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:04 compute-0 lvm[241143]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:22:04 compute-0 lvm[241144]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:22:04 compute-0 lvm[241144]: VG ceph_vg1 finished
Jan 20 19:22:04 compute-0 lvm[241143]: VG ceph_vg0 finished
Jan 20 19:22:04 compute-0 lvm[241146]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:22:04 compute-0 lvm[241146]: VG ceph_vg2 finished
Jan 20 19:22:05 compute-0 practical_volhard[241065]: {}
Jan 20 19:22:05 compute-0 systemd[1]: libpod-1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3.scope: Deactivated successfully.
Jan 20 19:22:05 compute-0 systemd[1]: libpod-1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3.scope: Consumed 1.304s CPU time.
Jan 20 19:22:05 compute-0 podman[241048]: 2026-01-20 19:22:05.072913758 +0000 UTC m=+1.024292146 container died 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:22:05 compute-0 systemd[1]: var-lib-containers-storage-overlay-0760e29fe017c935b34348ce07a462e14f09a3dd651ef011c1853bb6803e5484-merged.mount: Deactivated successfully.
Jan 20 19:22:05 compute-0 podman[241048]: 2026-01-20 19:22:05.200256248 +0000 UTC m=+1.151634636 container remove 1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=practical_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True)
Jan 20 19:22:05 compute-0 systemd[1]: libpod-conmon-1858b3cb639c0520483f0209bc8e75fc39c8f8acd2da8c263dc82f798379bfb3.scope: Deactivated successfully.
Jan 20 19:22:05 compute-0 ceph-mon[75120]: pgmap v699: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:05 compute-0 sudo[240973]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:22:05 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:22:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:22:05 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:22:05 compute-0 sudo[241162]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:22:05 compute-0 sudo[241162]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:22:05 compute-0 sudo[241162]: pam_unix(sudo:session): session closed for user root
Jan 20 19:22:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:22:05.447 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:22:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:22:05.449 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:22:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:22:05.449 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:22:06 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:22:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:22:07 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:22:07 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 3404 writes, 15K keys, 3404 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 3403 writes, 3403 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 1309 writes, 5927 keys, 1309 commit groups, 1.0 writes per commit group, ingest: 8.70 MB, 0.01 MB/s
                                           Interval WAL: 1308 writes, 1308 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    103.8      0.15              0.04         7    0.022       0      0       0.0       0.0
                                             L6      1/0    7.01 MB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   2.7    119.7     99.0      0.43              0.13         6    0.072     24K   3195       0.0       0.0
                                            Sum      1/0    7.01 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.7     88.3    100.3      0.59              0.17        13    0.045     24K   3195       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.0     95.5     95.7      0.37              0.10         8    0.046     17K   2463       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.0      0.0       0.0   0.0    119.7     99.0      0.43              0.13         6    0.072     24K   3195       0.0       0.0
                                           High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    129.1      0.12              0.04         6    0.021       0      0       0.0       0.0
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      1.9      0.03              0.00         1    0.031       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.016, interval 0.007
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.6 seconds
                                           Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.4 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x55eae3cfb8d0#2 capacity: 308.00 MB usage: 1.82 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 5.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(106,1.60 MB,0.520032%) FilterBlock(14,74.73 KB,0.0236957%) IndexBlock(14,152.67 KB,0.048407%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
Jan 20 19:22:07 compute-0 ceph-mon[75120]: pgmap v700: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:08 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:08 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:22:08.687 154796 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'f6:2e:45', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '46:02:c4:e7:e3:a1'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 20 19:22:08 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:22:08.688 154796 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 20 19:22:08 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:22:08.689 154796 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=15f2b046-37e6-488b-9e52-3d187c798598, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 20 19:22:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:09 compute-0 ceph-mon[75120]: pgmap v701: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:10 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:11 compute-0 ceph-mon[75120]: pgmap v702: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:11 compute-0 podman[241187]: 2026-01-20 19:22:11.42299082 +0000 UTC m=+0.088438477 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 20 19:22:12 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:13 compute-0 ceph-mon[75120]: pgmap v703: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:13 compute-0 podman[241213]: 2026-01-20 19:22:13.386950416 +0000 UTC m=+0.050705769 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 20 19:22:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:14 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:15 compute-0 ceph-mon[75120]: pgmap v704: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:16 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:17 compute-0 ceph-mon[75120]: pgmap v705: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:18 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:19 compute-0 ceph-mon[75120]: pgmap v706: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:20 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:21 compute-0 ceph-mon[75120]: pgmap v707: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:22 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:23 compute-0 ceph-mon[75120]: pgmap v708: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:24 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:25 compute-0 ceph-mon[75120]: pgmap v709: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:26 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:27 compute-0 ceph-mon[75120]: pgmap v710: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:28 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:29 compute-0 ceph-mon[75120]: pgmap v711: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:30 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:31 compute-0 ceph-mon[75120]: pgmap v712: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:22:31
Jan 20 19:22:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:22:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:22:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'vms', 'images', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', 'default.rgw.log']
Jan 20 19:22:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:22:32 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:33 compute-0 ceph-mon[75120]: pgmap v713: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:22:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:22:35 compute-0 ceph-mon[75120]: pgmap v714: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:36 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:37 compute-0 ceph-mon[75120]: pgmap v715: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:38 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:39 compute-0 ceph-mon[75120]: pgmap v716: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:40 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:41 compute-0 ceph-mon[75120]: pgmap v717: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:42 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:42 compute-0 podman[241232]: 2026-01-20 19:22:42.461285844 +0000 UTC m=+0.128594852 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller)
Jan 20 19:22:43 compute-0 ceph-mon[75120]: pgmap v718: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:44 compute-0 podman[241258]: 2026-01-20 19:22:44.380191867 +0000 UTC m=+0.051611792 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:22:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:22:45 compute-0 ceph-mon[75120]: pgmap v719: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:46 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:47 compute-0 ceph-mon[75120]: pgmap v720: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:48 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:22:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3724689865' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:22:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:22:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3724689865' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:22:49 compute-0 ceph-mon[75120]: pgmap v721: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:49 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/3724689865' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:22:49 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/3724689865' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:22:50 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:51 compute-0 ceph-mon[75120]: pgmap v722: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:52 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:53 compute-0 ceph-mon[75120]: pgmap v723: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:54 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:55 compute-0 ceph-mon[75120]: pgmap v724: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:56 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:56 compute-0 nova_compute[239038]: 2026-01-20 19:22:56.061 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:56 compute-0 nova_compute[239038]: 2026-01-20 19:22:56.061 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:56 compute-0 nova_compute[239038]: 2026-01-20 19:22:56.062 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:56 compute-0 nova_compute[239038]: 2026-01-20 19:22:56.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:56 compute-0 nova_compute[239038]: 2026-01-20 19:22:56.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:22:56 compute-0 nova_compute[239038]: 2026-01-20 19:22:56.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:22:56 compute-0 nova_compute[239038]: 2026-01-20 19:22:56.700 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:22:57 compute-0 ceph-mon[75120]: pgmap v725: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:57 compute-0 nova_compute[239038]: 2026-01-20 19:22:57.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:57 compute-0 nova_compute[239038]: 2026-01-20 19:22:57.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:57 compute-0 nova_compute[239038]: 2026-01-20 19:22:57.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:57 compute-0 nova_compute[239038]: 2026-01-20 19:22:57.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:22:58 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:58 compute-0 nova_compute[239038]: 2026-01-20 19:22:58.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:58 compute-0 nova_compute[239038]: 2026-01-20 19:22:58.684 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:22:58 compute-0 nova_compute[239038]: 2026-01-20 19:22:58.708 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:22:58 compute-0 nova_compute[239038]: 2026-01-20 19:22:58.709 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:22:58 compute-0 nova_compute[239038]: 2026-01-20 19:22:58.709 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:22:58 compute-0 nova_compute[239038]: 2026-01-20 19:22:58.709 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:22:58 compute-0 nova_compute[239038]: 2026-01-20 19:22:58.710 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:22:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:22:59 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:22:59 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/93760089' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:22:59 compute-0 nova_compute[239038]: 2026-01-20 19:22:59.248 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:22:59 compute-0 nova_compute[239038]: 2026-01-20 19:22:59.427 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:22:59 compute-0 nova_compute[239038]: 2026-01-20 19:22:59.428 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5165MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:22:59 compute-0 nova_compute[239038]: 2026-01-20 19:22:59.429 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:22:59 compute-0 nova_compute[239038]: 2026-01-20 19:22:59.429 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:22:59 compute-0 nova_compute[239038]: 2026-01-20 19:22:59.499 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:22:59 compute-0 nova_compute[239038]: 2026-01-20 19:22:59.499 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:22:59 compute-0 ceph-mon[75120]: pgmap v726: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:22:59 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/93760089' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:22:59 compute-0 nova_compute[239038]: 2026-01-20 19:22:59.527 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:23:00 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:23:00 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3295232245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:23:00 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:00 compute-0 nova_compute[239038]: 2026-01-20 19:23:00.062 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:23:00 compute-0 nova_compute[239038]: 2026-01-20 19:23:00.067 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:23:00 compute-0 nova_compute[239038]: 2026-01-20 19:23:00.096 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:23:00 compute-0 nova_compute[239038]: 2026-01-20 19:23:00.097 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:23:00 compute-0 nova_compute[239038]: 2026-01-20 19:23:00.097 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:23:00 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3295232245' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:23:01 compute-0 ceph-mon[75120]: pgmap v727: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:02 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:02 compute-0 ceph-osd[87071]: bluestore.MempoolThread fragmentation_score=0.000127 took=0.000074s
Jan 20 19:23:02 compute-0 ceph-osd[88112]: bluestore.MempoolThread fragmentation_score=0.000142 took=0.000036s
Jan 20 19:23:02 compute-0 ceph-osd[86022]: bluestore.MempoolThread fragmentation_score=0.000140 took=0.000047s
Jan 20 19:23:03 compute-0 ceph-mon[75120]: pgmap v728: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:04 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:05 compute-0 sudo[241322]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:23:05 compute-0 sudo[241322]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:05 compute-0 sudo[241322]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:23:05.449 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:23:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:23:05.449 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:23:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:23:05.449 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:23:05 compute-0 sudo[241347]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:23:05 compute-0 sudo[241347]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:05 compute-0 ceph-mon[75120]: pgmap v729: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:06 compute-0 sudo[241347]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:23:06 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:23:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:23:06 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:23:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:23:06 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:06 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:23:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:23:06 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:23:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:23:06 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:23:06 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:23:06 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:23:06 compute-0 sudo[241403]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:23:06 compute-0 sudo[241403]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:06 compute-0 sudo[241403]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:06 compute-0 sudo[241428]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:23:06 compute-0 sudo[241428]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:06 compute-0 podman[241465]: 2026-01-20 19:23:06.452812146 +0000 UTC m=+0.037626798 container create a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:23:06 compute-0 systemd[1]: Started libpod-conmon-a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de.scope.
Jan 20 19:23:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:23:06 compute-0 podman[241465]: 2026-01-20 19:23:06.436953464 +0000 UTC m=+0.021768136 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:23:06 compute-0 podman[241465]: 2026-01-20 19:23:06.537174288 +0000 UTC m=+0.121988960 container init a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:23:06 compute-0 podman[241465]: 2026-01-20 19:23:06.544644347 +0000 UTC m=+0.129458999 container start a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:23:06 compute-0 podman[241465]: 2026-01-20 19:23:06.548428319 +0000 UTC m=+0.133243031 container attach a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 20 19:23:06 compute-0 exciting_antonelli[241481]: 167 167
Jan 20 19:23:06 compute-0 systemd[1]: libpod-a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de.scope: Deactivated successfully.
Jan 20 19:23:06 compute-0 conmon[241481]: conmon a681ddd233932eee3f27 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de.scope/container/memory.events
Jan 20 19:23:06 compute-0 podman[241465]: 2026-01-20 19:23:06.552943237 +0000 UTC m=+0.137757889 container died a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:23:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:23:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:23:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:23:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:23:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:23:06 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:23:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-c2af17693ced31daa9b7db54034f21e61678271d6a4fd0bc056e1c43e7495e4e-merged.mount: Deactivated successfully.
Jan 20 19:23:06 compute-0 podman[241465]: 2026-01-20 19:23:06.59125554 +0000 UTC m=+0.176070192 container remove a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=exciting_antonelli, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:23:06 compute-0 systemd[1]: libpod-conmon-a681ddd233932eee3f2794cc912d8ac2703c5eed0eaa62f080a0504ae2ca16de.scope: Deactivated successfully.
Jan 20 19:23:06 compute-0 podman[241505]: 2026-01-20 19:23:06.779469722 +0000 UTC m=+0.045608999 container create c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:23:06 compute-0 systemd[1]: Started libpod-conmon-c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb.scope.
Jan 20 19:23:06 compute-0 podman[241505]: 2026-01-20 19:23:06.761321455 +0000 UTC m=+0.027460752 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:23:06 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:06 compute-0 podman[241505]: 2026-01-20 19:23:06.879261575 +0000 UTC m=+0.145400872 container init c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:23:06 compute-0 podman[241505]: 2026-01-20 19:23:06.886726855 +0000 UTC m=+0.152866132 container start c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:23:06 compute-0 podman[241505]: 2026-01-20 19:23:06.895520746 +0000 UTC m=+0.161660023 container attach c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 19:23:07 compute-0 busy_ritchie[241521]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:23:07 compute-0 busy_ritchie[241521]: --> All data devices are unavailable
Jan 20 19:23:07 compute-0 systemd[1]: libpod-c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb.scope: Deactivated successfully.
Jan 20 19:23:07 compute-0 podman[241505]: 2026-01-20 19:23:07.411636355 +0000 UTC m=+0.677775632 container died c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:23:07 compute-0 systemd[1]: var-lib-containers-storage-overlay-25a438907097335f4d3ba851b07d4de4cfd9329a6023e04cec898e2a70fe5e79-merged.mount: Deactivated successfully.
Jan 20 19:23:07 compute-0 podman[241505]: 2026-01-20 19:23:07.562324064 +0000 UTC m=+0.828463341 container remove c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=busy_ritchie, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:23:07 compute-0 ceph-mon[75120]: pgmap v730: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:07 compute-0 systemd[1]: libpod-conmon-c5b7c5b1fb942ee8e0bc12f52ab9e3362ff16e54cafee2fbfbb30f4b682d6adb.scope: Deactivated successfully.
Jan 20 19:23:07 compute-0 sudo[241428]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:07 compute-0 sudo[241557]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:23:07 compute-0 sudo[241557]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:07 compute-0 sudo[241557]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:07 compute-0 sudo[241582]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:23:07 compute-0 sudo[241582]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:08 compute-0 podman[241619]: 2026-01-20 19:23:08.017803362 +0000 UTC m=+0.042067154 container create bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:23:08 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:08 compute-0 systemd[1]: Started libpod-conmon-bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3.scope.
Jan 20 19:23:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:23:08 compute-0 podman[241619]: 2026-01-20 19:23:08.087380228 +0000 UTC m=+0.111644050 container init bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:23:08 compute-0 podman[241619]: 2026-01-20 19:23:08.092707106 +0000 UTC m=+0.116970898 container start bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Jan 20 19:23:08 compute-0 podman[241619]: 2026-01-20 19:23:07.998736852 +0000 UTC m=+0.023000664 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:23:08 compute-0 podman[241619]: 2026-01-20 19:23:08.09621384 +0000 UTC m=+0.120477622 container attach bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:23:08 compute-0 epic_mclean[241635]: 167 167
Jan 20 19:23:08 compute-0 systemd[1]: libpod-bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3.scope: Deactivated successfully.
Jan 20 19:23:08 compute-0 podman[241619]: 2026-01-20 19:23:08.098154076 +0000 UTC m=+0.122417878 container died bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:23:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-a271f9a005fa7fbfe099ab564f71804379ae6d39e86f0341d9e962f855b15f88-merged.mount: Deactivated successfully.
Jan 20 19:23:08 compute-0 podman[241619]: 2026-01-20 19:23:08.150985919 +0000 UTC m=+0.175249711 container remove bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=epic_mclean, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Jan 20 19:23:08 compute-0 systemd[1]: libpod-conmon-bee38c26f8d84c01672f77ecd218fb1a43bfe034873875ff464b8d8e58421db3.scope: Deactivated successfully.
Jan 20 19:23:08 compute-0 podman[241659]: 2026-01-20 19:23:08.294895885 +0000 UTC m=+0.036837298 container create a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:23:08 compute-0 systemd[1]: Started libpod-conmon-a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3.scope.
Jan 20 19:23:08 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a2a3c60376f280502c686a2ef634ff32c6d4cc8420aa8d840be98ac5262d2d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a2a3c60376f280502c686a2ef634ff32c6d4cc8420aa8d840be98ac5262d2d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a2a3c60376f280502c686a2ef634ff32c6d4cc8420aa8d840be98ac5262d2d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:08 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18a2a3c60376f280502c686a2ef634ff32c6d4cc8420aa8d840be98ac5262d2d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:08 compute-0 podman[241659]: 2026-01-20 19:23:08.279906923 +0000 UTC m=+0.021848336 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:23:08 compute-0 podman[241659]: 2026-01-20 19:23:08.386156752 +0000 UTC m=+0.128098175 container init a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:23:08 compute-0 podman[241659]: 2026-01-20 19:23:08.397093486 +0000 UTC m=+0.139034899 container start a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 19:23:08 compute-0 podman[241659]: 2026-01-20 19:23:08.399901043 +0000 UTC m=+0.141842536 container attach a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]: {
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:     "0": [
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:         {
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "devices": [
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "/dev/loop3"
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             ],
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_name": "ceph_lv0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_size": "21470642176",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "name": "ceph_lv0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "tags": {
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.cluster_name": "ceph",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.crush_device_class": "",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.encrypted": "0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.objectstore": "bluestore",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.osd_id": "0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.type": "block",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.vdo": "0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.with_tpm": "0"
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             },
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "type": "block",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "vg_name": "ceph_vg0"
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:         }
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:     ],
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:     "1": [
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:         {
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "devices": [
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "/dev/loop4"
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             ],
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_name": "ceph_lv1",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_size": "21470642176",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "name": "ceph_lv1",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "tags": {
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.cluster_name": "ceph",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.crush_device_class": "",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.encrypted": "0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.objectstore": "bluestore",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.osd_id": "1",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.type": "block",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.vdo": "0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.with_tpm": "0"
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             },
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "type": "block",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "vg_name": "ceph_vg1"
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:         }
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:     ],
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:     "2": [
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:         {
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "devices": [
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "/dev/loop5"
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             ],
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_name": "ceph_lv2",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_size": "21470642176",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "name": "ceph_lv2",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "tags": {
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.cluster_name": "ceph",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.crush_device_class": "",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.encrypted": "0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.objectstore": "bluestore",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.osd_id": "2",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.type": "block",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.vdo": "0",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:                 "ceph.with_tpm": "0"
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             },
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "type": "block",
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:             "vg_name": "ceph_vg2"
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:         }
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]:     ]
Jan 20 19:23:08 compute-0 hungry_hofstadter[241675]: }
Jan 20 19:23:08 compute-0 systemd[1]: libpod-a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3.scope: Deactivated successfully.
Jan 20 19:23:08 compute-0 podman[241659]: 2026-01-20 19:23:08.681088075 +0000 UTC m=+0.423029488 container died a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:23:08 compute-0 systemd[1]: var-lib-containers-storage-overlay-18a2a3c60376f280502c686a2ef634ff32c6d4cc8420aa8d840be98ac5262d2d-merged.mount: Deactivated successfully.
Jan 20 19:23:08 compute-0 podman[241659]: 2026-01-20 19:23:08.721451007 +0000 UTC m=+0.463392420 container remove a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=hungry_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:23:08 compute-0 systemd[1]: libpod-conmon-a67f4a2404d33056edf7c865813a336a8d1c724e8765730e36c28c65aaa295d3.scope: Deactivated successfully.
Jan 20 19:23:08 compute-0 sudo[241582]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:08 compute-0 sudo[241696]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:23:08 compute-0 sudo[241696]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:08 compute-0 sudo[241696]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:08 compute-0 sudo[241721]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:23:08 compute-0 sudo[241721]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:09 compute-0 podman[241759]: 2026-01-20 19:23:09.134602556 +0000 UTC m=+0.035496797 container create 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 19:23:09 compute-0 systemd[1]: Started libpod-conmon-52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d.scope.
Jan 20 19:23:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:23:09 compute-0 podman[241759]: 2026-01-20 19:23:09.208186117 +0000 UTC m=+0.109080378 container init 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:23:09 compute-0 podman[241759]: 2026-01-20 19:23:09.119036141 +0000 UTC m=+0.019930402 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:23:09 compute-0 podman[241759]: 2026-01-20 19:23:09.214774916 +0000 UTC m=+0.115669157 container start 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 19:23:09 compute-0 podman[241759]: 2026-01-20 19:23:09.217983753 +0000 UTC m=+0.118878024 container attach 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 19:23:09 compute-0 angry_sanderson[241775]: 167 167
Jan 20 19:23:09 compute-0 systemd[1]: libpod-52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d.scope: Deactivated successfully.
Jan 20 19:23:09 compute-0 podman[241759]: 2026-01-20 19:23:09.219409267 +0000 UTC m=+0.120303528 container died 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:23:09 compute-0 systemd[1]: var-lib-containers-storage-overlay-1041fb1858d3b0bc284f5ca155323af21c7505e692b2ad40879ecd05c84fcab8-merged.mount: Deactivated successfully.
Jan 20 19:23:09 compute-0 podman[241759]: 2026-01-20 19:23:09.253455708 +0000 UTC m=+0.154349949 container remove 52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=angry_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:23:09 compute-0 systemd[1]: libpod-conmon-52d2efa939713b620cdc1c2672c8e8ac17f4c27fe9c5a6c9f42e93b7feb19a9d.scope: Deactivated successfully.
Jan 20 19:23:09 compute-0 podman[241799]: 2026-01-20 19:23:09.404853193 +0000 UTC m=+0.039312257 container create 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Jan 20 19:23:09 compute-0 systemd[1]: Started libpod-conmon-4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135.scope.
Jan 20 19:23:09 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd1ef413a5f558fe326c1b95bfc2878d14fba8dc59ccddbe160851aa1f905b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd1ef413a5f558fe326c1b95bfc2878d14fba8dc59ccddbe160851aa1f905b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd1ef413a5f558fe326c1b95bfc2878d14fba8dc59ccddbe160851aa1f905b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fd1ef413a5f558fe326c1b95bfc2878d14fba8dc59ccddbe160851aa1f905b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:23:09 compute-0 podman[241799]: 2026-01-20 19:23:09.480604698 +0000 UTC m=+0.115063782 container init 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:23:09 compute-0 podman[241799]: 2026-01-20 19:23:09.386127362 +0000 UTC m=+0.020586426 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:23:09 compute-0 podman[241799]: 2026-01-20 19:23:09.492488013 +0000 UTC m=+0.126947067 container start 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=tentacle, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:23:09 compute-0 podman[241799]: 2026-01-20 19:23:09.495716922 +0000 UTC m=+0.130175976 container attach 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:23:09 compute-0 ceph-mon[75120]: pgmap v731: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:10 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:10 compute-0 lvm[241893]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:23:10 compute-0 lvm[241893]: VG ceph_vg0 finished
Jan 20 19:23:10 compute-0 lvm[241894]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:23:10 compute-0 lvm[241894]: VG ceph_vg1 finished
Jan 20 19:23:10 compute-0 lvm[241896]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:23:10 compute-0 lvm[241896]: VG ceph_vg2 finished
Jan 20 19:23:10 compute-0 sleepy_babbage[241815]: {}
Jan 20 19:23:10 compute-0 systemd[1]: libpod-4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135.scope: Deactivated successfully.
Jan 20 19:23:10 compute-0 podman[241799]: 2026-01-20 19:23:10.357673547 +0000 UTC m=+0.992132621 container died 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Jan 20 19:23:10 compute-0 systemd[1]: libpod-4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135.scope: Consumed 1.302s CPU time.
Jan 20 19:23:10 compute-0 systemd[1]: var-lib-containers-storage-overlay-7fd1ef413a5f558fe326c1b95bfc2878d14fba8dc59ccddbe160851aa1f905b8-merged.mount: Deactivated successfully.
Jan 20 19:23:10 compute-0 podman[241799]: 2026-01-20 19:23:10.396472432 +0000 UTC m=+1.030931486 container remove 4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sleepy_babbage, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS)
Jan 20 19:23:10 compute-0 systemd[1]: libpod-conmon-4fa342370bd2333c70000daa5665b1494e00081d9177f490a8efec7c7564c135.scope: Deactivated successfully.
Jan 20 19:23:10 compute-0 sudo[241721]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:23:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:23:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:23:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:23:10 compute-0 sudo[241911]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:23:10 compute-0 sudo[241911]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:23:10 compute-0 sudo[241911]: pam_unix(sudo:session): session closed for user root
Jan 20 19:23:11 compute-0 ceph-mon[75120]: pgmap v732: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:23:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:23:12 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:13 compute-0 podman[241936]: 2026-01-20 19:23:13.428580517 +0000 UTC m=+0.098159775 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 20 19:23:13 compute-0 ceph-mon[75120]: pgmap v733: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:14 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:15 compute-0 podman[241962]: 2026-01-20 19:23:15.37795847 +0000 UTC m=+0.056056691 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 20 19:23:15 compute-0 ceph-mon[75120]: pgmap v734: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:16 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:17 compute-0 ceph-mon[75120]: pgmap v735: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:18 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:18 compute-0 sshd-session[241981]: Invalid user ubuntu from 45.148.10.240 port 38116
Jan 20 19:23:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:19 compute-0 sshd-session[241981]: Connection closed by invalid user ubuntu 45.148.10.240 port 38116 [preauth]
Jan 20 19:23:19 compute-0 ceph-mon[75120]: pgmap v736: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:20 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:21 compute-0 ceph-mon[75120]: pgmap v737: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:22 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:23 compute-0 ceph-mon[75120]: pgmap v738: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:24 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:25 compute-0 ceph-mon[75120]: pgmap v739: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:26 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:27 compute-0 ceph-mon[75120]: pgmap v740: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:28 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:29 compute-0 ceph-mon[75120]: pgmap v741: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:30 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:31 compute-0 ceph-mon[75120]: pgmap v742: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:23:31 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5863 writes, 24K keys, 5863 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5863 writes, 1003 syncs, 5.85 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:23:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:23:31
Jan 20 19:23:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:23:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:23:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'vms', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.data']
Jan 20 19:23:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:23:32 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:33 compute-0 ceph-mon[75120]: pgmap v743: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:23:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:23:35 compute-0 ceph-mon[75120]: pgmap v744: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:23:35 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 7128 writes, 29K keys, 7128 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7128 writes, 1427 syncs, 5.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:23:36 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:37 compute-0 ceph-mon[75120]: pgmap v745: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:38 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:39 compute-0 ceph-mon[75120]: pgmap v746: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:40 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:41 compute-0 ceph-mon[75120]: pgmap v747: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:42 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:23:42 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5637 writes, 24K keys, 5637 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5637 writes, 873 syncs, 6.46 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:23:43 compute-0 ceph-mon[75120]: pgmap v748: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:44 compute-0 podman[241983]: 2026-01-20 19:23:44.409632193 +0000 UTC m=+0.088038051 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:23:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:23:45 compute-0 ceph-mon[75120]: pgmap v749: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:46 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:46 compute-0 podman[242009]: 2026-01-20 19:23:46.366477283 +0000 UTC m=+0.046219264 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 20 19:23:46 compute-0 ceph-mgr[75417]: [devicehealth INFO root] Check health
Jan 20 19:23:47 compute-0 ceph-mon[75120]: pgmap v750: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:48 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:23:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2573394814' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:23:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:23:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2573394814' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:23:49 compute-0 ceph-mon[75120]: pgmap v751: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:49 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/2573394814' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:23:49 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/2573394814' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:23:50 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.600261) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030600314, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1500, "num_deletes": 251, "total_data_size": 2411145, "memory_usage": 2456784, "flush_reason": "Manual Compaction"}
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030615632, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2377337, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14863, "largest_seqno": 16362, "table_properties": {"data_size": 2370364, "index_size": 4044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14221, "raw_average_key_size": 19, "raw_value_size": 2356428, "raw_average_value_size": 3259, "num_data_blocks": 185, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768936869, "oldest_key_time": 1768936869, "file_creation_time": 1768937030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 15424 microseconds, and 5453 cpu microseconds.
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.615692) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2377337 bytes OK
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.615710) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.617408) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.617435) EVENT_LOG_v1 {"time_micros": 1768937030617431, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.617452) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2404594, prev total WAL file size 2404594, number of live WAL files 2.
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.618092) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2321KB)], [35(7181KB)]
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030618154, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9731233, "oldest_snapshot_seqno": -1}
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4032 keys, 7908738 bytes, temperature: kUnknown
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030683784, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7908738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7879686, "index_size": 17870, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 98440, "raw_average_key_size": 24, "raw_value_size": 7804631, "raw_average_value_size": 1935, "num_data_blocks": 755, "num_entries": 4032, "num_filter_entries": 4032, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768937030, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.684519) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7908738 bytes
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.685792) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.1 rd, 119.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.3, 7.0 +0.0 blob) out(7.5 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4546, records dropped: 514 output_compression: NoCompression
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.685806) EVENT_LOG_v1 {"time_micros": 1768937030685799, "job": 16, "event": "compaction_finished", "compaction_time_micros": 66155, "compaction_time_cpu_micros": 17296, "output_level": 6, "num_output_files": 1, "total_output_size": 7908738, "num_input_records": 4546, "num_output_records": 4032, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030686255, "job": 16, "event": "table_file_deletion", "file_number": 37}
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937030687563, "job": 16, "event": "table_file_deletion", "file_number": 35}
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.618029) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.687674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.687680) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.687682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.687684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:23:50 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:23:50.687686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:23:51 compute-0 ceph-mon[75120]: pgmap v752: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:52 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:53 compute-0 ceph-mon[75120]: pgmap v753: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:54 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:55 compute-0 ceph-mon[75120]: pgmap v754: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:56 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:57 compute-0 nova_compute[239038]: 2026-01-20 19:23:57.091 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:57 compute-0 nova_compute[239038]: 2026-01-20 19:23:57.115 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:57 compute-0 ceph-mon[75120]: pgmap v755: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:57 compute-0 nova_compute[239038]: 2026-01-20 19:23:57.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:57 compute-0 nova_compute[239038]: 2026-01-20 19:23:57.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:57 compute-0 nova_compute[239038]: 2026-01-20 19:23:57.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:57 compute-0 nova_compute[239038]: 2026-01-20 19:23:57.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:58 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:23:58 compute-0 nova_compute[239038]: 2026-01-20 19:23:58.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:58 compute-0 nova_compute[239038]: 2026-01-20 19:23:58.684 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:23:58 compute-0 nova_compute[239038]: 2026-01-20 19:23:58.684 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:23:58 compute-0 nova_compute[239038]: 2026-01-20 19:23:58.700 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:23:58 compute-0 nova_compute[239038]: 2026-01-20 19:23:58.701 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:58 compute-0 nova_compute[239038]: 2026-01-20 19:23:58.701 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:23:58 compute-0 nova_compute[239038]: 2026-01-20 19:23:58.702 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:23:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:23:59 compute-0 ceph-mon[75120]: pgmap v756: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:00 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:00 compute-0 nova_compute[239038]: 2026-01-20 19:24:00.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:00 compute-0 nova_compute[239038]: 2026-01-20 19:24:00.705 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:24:00 compute-0 nova_compute[239038]: 2026-01-20 19:24:00.705 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:24:00 compute-0 nova_compute[239038]: 2026-01-20 19:24:00.706 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:24:00 compute-0 nova_compute[239038]: 2026-01-20 19:24:00.706 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:24:00 compute-0 nova_compute[239038]: 2026-01-20 19:24:00.706 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:24:01 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:24:01 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1715516207' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.214 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.352 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.353 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5162MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.353 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.354 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.402 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.403 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.415 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:24:01 compute-0 ceph-mon[75120]: pgmap v757: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:01 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1715516207' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:24:01 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:24:01 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3999436920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.956 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.540s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.961 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.977 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.978 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:24:01 compute-0 nova_compute[239038]: 2026-01-20 19:24:01.978 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:24:02 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:02 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3999436920' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:24:03 compute-0 ceph-mon[75120]: pgmap v758: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:04 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:24:05.450 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:24:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:24:05.451 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:24:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:24:05.451 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:24:05 compute-0 ceph-mon[75120]: pgmap v759: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:06 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:07 compute-0 ceph-mon[75120]: pgmap v760: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:08 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:09 compute-0 ceph-mon[75120]: pgmap v761: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:10 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:10 compute-0 sudo[242072]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:24:10 compute-0 sudo[242072]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:10 compute-0 sudo[242072]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:10 compute-0 sudo[242097]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 check-host
Jan 20 19:24:10 compute-0 sudo[242097]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:10 compute-0 sudo[242097]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:24:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:24:10 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:24:10 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:24:11 compute-0 sudo[242141]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:24:11 compute-0 sudo[242141]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:11 compute-0 sudo[242141]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:11 compute-0 sudo[242166]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:24:11 compute-0 sudo[242166]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:11 compute-0 sudo[242166]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:24:11 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:24:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:24:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:24:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:24:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:24:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:24:11 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:24:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:24:11 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:24:11 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:24:11 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:24:11 compute-0 sudo[242223]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:24:11 compute-0 sudo[242223]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:11 compute-0 sudo[242223]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:11 compute-0 sudo[242248]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:24:11 compute-0 sudo[242248]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:11 compute-0 ceph-mon[75120]: pgmap v762: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:24:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:24:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:24:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:24:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:24:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:24:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:24:11 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:24:12 compute-0 podman[242285]: 2026-01-20 19:24:12.062380249 +0000 UTC m=+0.040520677 container create b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:24:12 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:12 compute-0 systemd[1]: Started libpod-conmon-b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77.scope.
Jan 20 19:24:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:12 compute-0 podman[242285]: 2026-01-20 19:24:12.13346871 +0000 UTC m=+0.111609158 container init b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 19:24:12 compute-0 podman[242285]: 2026-01-20 19:24:12.047130952 +0000 UTC m=+0.025271400 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:24:12 compute-0 podman[242285]: 2026-01-20 19:24:12.148640585 +0000 UTC m=+0.126781013 container start b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Jan 20 19:24:12 compute-0 podman[242285]: 2026-01-20 19:24:12.152004167 +0000 UTC m=+0.130144595 container attach b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:24:12 compute-0 elated_maxwell[242302]: 167 167
Jan 20 19:24:12 compute-0 podman[242285]: 2026-01-20 19:24:12.15919766 +0000 UTC m=+0.137338088 container died b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:24:12 compute-0 systemd[1]: libpod-b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77.scope: Deactivated successfully.
Jan 20 19:24:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-f2d3543d4146eef796ef3b5a92f240c8c0c40fd6965b9115a6c64e3a6fe559a2-merged.mount: Deactivated successfully.
Jan 20 19:24:12 compute-0 podman[242285]: 2026-01-20 19:24:12.1998753 +0000 UTC m=+0.178015728 container remove b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elated_maxwell, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:24:12 compute-0 systemd[1]: libpod-conmon-b71c286b61378d3e1929ca5f8ff3578e6ddf401d2934ca399be6c269a80fed77.scope: Deactivated successfully.
Jan 20 19:24:12 compute-0 podman[242325]: 2026-01-20 19:24:12.381524903 +0000 UTC m=+0.047117805 container create 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:24:12 compute-0 systemd[1]: Started libpod-conmon-783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf.scope.
Jan 20 19:24:12 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:12 compute-0 podman[242325]: 2026-01-20 19:24:12.362464815 +0000 UTC m=+0.028057767 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:24:12 compute-0 podman[242325]: 2026-01-20 19:24:12.471279215 +0000 UTC m=+0.136872147 container init 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Jan 20 19:24:12 compute-0 podman[242325]: 2026-01-20 19:24:12.479812611 +0000 UTC m=+0.145405513 container start 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:24:12 compute-0 podman[242325]: 2026-01-20 19:24:12.4839166 +0000 UTC m=+0.149509502 container attach 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:24:12 compute-0 loving_swartz[242341]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:24:12 compute-0 loving_swartz[242341]: --> All data devices are unavailable
Jan 20 19:24:12 compute-0 systemd[1]: libpod-783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf.scope: Deactivated successfully.
Jan 20 19:24:12 compute-0 podman[242325]: 2026-01-20 19:24:12.956022408 +0000 UTC m=+0.621615310 container died 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:24:12 compute-0 systemd[1]: var-lib-containers-storage-overlay-4dc7cd4bea1a54b21596aa86916c6b610eb3e1123efb02f848af5b178aac4e3f-merged.mount: Deactivated successfully.
Jan 20 19:24:12 compute-0 podman[242325]: 2026-01-20 19:24:12.99556254 +0000 UTC m=+0.661155442 container remove 783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=loving_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 20 19:24:13 compute-0 systemd[1]: libpod-conmon-783a4e87e98821988bf813c20eb06a22981b1777592c8b021d7474ec7ab52edf.scope: Deactivated successfully.
Jan 20 19:24:13 compute-0 sudo[242248]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:13 compute-0 sudo[242373]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:24:13 compute-0 sudo[242373]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:13 compute-0 sudo[242373]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:13 compute-0 sudo[242398]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:24:13 compute-0 sudo[242398]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:13 compute-0 podman[242435]: 2026-01-20 19:24:13.406075705 +0000 UTC m=+0.037525995 container create 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Jan 20 19:24:13 compute-0 systemd[1]: Started libpod-conmon-54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e.scope.
Jan 20 19:24:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:13 compute-0 podman[242435]: 2026-01-20 19:24:13.476532162 +0000 UTC m=+0.107982462 container init 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:24:13 compute-0 podman[242435]: 2026-01-20 19:24:13.483956081 +0000 UTC m=+0.115406361 container start 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 19:24:13 compute-0 podman[242435]: 2026-01-20 19:24:13.389034855 +0000 UTC m=+0.020485165 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:24:13 compute-0 podman[242435]: 2026-01-20 19:24:13.487113427 +0000 UTC m=+0.118563707 container attach 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:24:13 compute-0 kind_goldberg[242451]: 167 167
Jan 20 19:24:13 compute-0 systemd[1]: libpod-54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e.scope: Deactivated successfully.
Jan 20 19:24:13 compute-0 podman[242435]: 2026-01-20 19:24:13.490270813 +0000 UTC m=+0.121721153 container died 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:24:13 compute-0 systemd[1]: var-lib-containers-storage-overlay-3d91e8edc2fac5066a6ff7de221c49f344d9255be4a8a45fd2f6ec3a0c27c190-merged.mount: Deactivated successfully.
Jan 20 19:24:13 compute-0 podman[242435]: 2026-01-20 19:24:13.532545521 +0000 UTC m=+0.163995821 container remove 54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=kind_goldberg, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:24:13 compute-0 systemd[1]: libpod-conmon-54b56d8bd5fcec1b646ace86c355984970721222927c37b5d3fef95996f48f7e.scope: Deactivated successfully.
Jan 20 19:24:13 compute-0 podman[242475]: 2026-01-20 19:24:13.689551592 +0000 UTC m=+0.037668518 container create 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:24:13 compute-0 systemd[1]: Started libpod-conmon-5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f.scope.
Jan 20 19:24:13 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7cfc68948daa9b8393f9122158673458a4029e6d62b7e3c195cb6138c22af7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7cfc68948daa9b8393f9122158673458a4029e6d62b7e3c195cb6138c22af7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7cfc68948daa9b8393f9122158673458a4029e6d62b7e3c195cb6138c22af7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7cfc68948daa9b8393f9122158673458a4029e6d62b7e3c195cb6138c22af7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:13 compute-0 podman[242475]: 2026-01-20 19:24:13.759730372 +0000 UTC m=+0.107847318 container init 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:24:13 compute-0 podman[242475]: 2026-01-20 19:24:13.765396728 +0000 UTC m=+0.113513654 container start 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:24:13 compute-0 podman[242475]: 2026-01-20 19:24:13.767845027 +0000 UTC m=+0.115961953 container attach 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:24:13 compute-0 podman[242475]: 2026-01-20 19:24:13.67327233 +0000 UTC m=+0.021389256 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:24:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:13 compute-0 ceph-mon[75120]: pgmap v763: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:14 compute-0 nervous_colden[242492]: {
Jan 20 19:24:14 compute-0 nervous_colden[242492]:     "0": [
Jan 20 19:24:14 compute-0 nervous_colden[242492]:         {
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "devices": [
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "/dev/loop3"
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             ],
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_name": "ceph_lv0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_size": "21470642176",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "name": "ceph_lv0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "tags": {
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.cluster_name": "ceph",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.crush_device_class": "",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.encrypted": "0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.objectstore": "bluestore",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.osd_id": "0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.type": "block",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.vdo": "0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.with_tpm": "0"
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             },
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "type": "block",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "vg_name": "ceph_vg0"
Jan 20 19:24:14 compute-0 nervous_colden[242492]:         }
Jan 20 19:24:14 compute-0 nervous_colden[242492]:     ],
Jan 20 19:24:14 compute-0 nervous_colden[242492]:     "1": [
Jan 20 19:24:14 compute-0 nervous_colden[242492]:         {
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "devices": [
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "/dev/loop4"
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             ],
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_name": "ceph_lv1",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_size": "21470642176",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "name": "ceph_lv1",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "tags": {
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.cluster_name": "ceph",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.crush_device_class": "",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.encrypted": "0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.objectstore": "bluestore",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.osd_id": "1",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.type": "block",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.vdo": "0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.with_tpm": "0"
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             },
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "type": "block",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "vg_name": "ceph_vg1"
Jan 20 19:24:14 compute-0 nervous_colden[242492]:         }
Jan 20 19:24:14 compute-0 nervous_colden[242492]:     ],
Jan 20 19:24:14 compute-0 nervous_colden[242492]:     "2": [
Jan 20 19:24:14 compute-0 nervous_colden[242492]:         {
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "devices": [
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "/dev/loop5"
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             ],
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_name": "ceph_lv2",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_size": "21470642176",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "name": "ceph_lv2",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "tags": {
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.cluster_name": "ceph",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.crush_device_class": "",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.encrypted": "0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.objectstore": "bluestore",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.osd_id": "2",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.type": "block",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.vdo": "0",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:                 "ceph.with_tpm": "0"
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             },
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "type": "block",
Jan 20 19:24:14 compute-0 nervous_colden[242492]:             "vg_name": "ceph_vg2"
Jan 20 19:24:14 compute-0 nervous_colden[242492]:         }
Jan 20 19:24:14 compute-0 nervous_colden[242492]:     ]
Jan 20 19:24:14 compute-0 nervous_colden[242492]: }
Jan 20 19:24:14 compute-0 systemd[1]: libpod-5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f.scope: Deactivated successfully.
Jan 20 19:24:14 compute-0 podman[242475]: 2026-01-20 19:24:14.056061127 +0000 UTC m=+0.404178053 container died 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:24:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-1a7cfc68948daa9b8393f9122158673458a4029e6d62b7e3c195cb6138c22af7-merged.mount: Deactivated successfully.
Jan 20 19:24:14 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:14 compute-0 podman[242475]: 2026-01-20 19:24:14.096553302 +0000 UTC m=+0.444670228 container remove 5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nervous_colden, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:24:14 compute-0 systemd[1]: libpod-conmon-5262acfef033e0552e7be9e0df1eee4984db07c6c30a8a8e0ff05bcfd2b95c7f.scope: Deactivated successfully.
Jan 20 19:24:14 compute-0 sudo[242398]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:14 compute-0 sudo[242514]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:24:14 compute-0 sudo[242514]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:14 compute-0 sudo[242514]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:14 compute-0 sudo[242539]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:24:14 compute-0 sudo[242539]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:14 compute-0 podman[242577]: 2026-01-20 19:24:14.545735069 +0000 UTC m=+0.045918507 container create 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:24:14 compute-0 systemd[1]: Started libpod-conmon-7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9.scope.
Jan 20 19:24:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:14 compute-0 podman[242577]: 2026-01-20 19:24:14.613721547 +0000 UTC m=+0.113904995 container init 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:24:14 compute-0 podman[242577]: 2026-01-20 19:24:14.52168083 +0000 UTC m=+0.021864308 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:24:14 compute-0 podman[242577]: 2026-01-20 19:24:14.621231787 +0000 UTC m=+0.121415215 container start 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:24:14 compute-0 podman[242577]: 2026-01-20 19:24:14.625250204 +0000 UTC m=+0.125433632 container attach 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Jan 20 19:24:14 compute-0 bold_wu[242594]: 167 167
Jan 20 19:24:14 compute-0 systemd[1]: libpod-7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9.scope: Deactivated successfully.
Jan 20 19:24:14 compute-0 podman[242577]: 2026-01-20 19:24:14.628087642 +0000 UTC m=+0.128271080 container died 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:24:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-da436940e6a717d6d99063ea972526cb48b80ad8dcaf62a6460ec128891441af-merged.mount: Deactivated successfully.
Jan 20 19:24:14 compute-0 podman[242577]: 2026-01-20 19:24:14.668232989 +0000 UTC m=+0.168416417 container remove 7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=bold_wu, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:24:14 compute-0 podman[242591]: 2026-01-20 19:24:14.670270918 +0000 UTC m=+0.089871215 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller)
Jan 20 19:24:14 compute-0 systemd[1]: libpod-conmon-7eb8b3a865917f35865e4284b1c9ef8664a657088a585f05ab6cf3af178f66c9.scope: Deactivated successfully.
Jan 20 19:24:14 compute-0 podman[242642]: 2026-01-20 19:24:14.837192927 +0000 UTC m=+0.043284463 container create e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030)
Jan 20 19:24:14 compute-0 systemd[1]: Started libpod-conmon-e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858.scope.
Jan 20 19:24:14 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:24:14 compute-0 podman[242642]: 2026-01-20 19:24:14.817222887 +0000 UTC m=+0.023314433 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:24:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2d4756bb441ec1c6bf2771427429c395e14bfb109b38a8fbd1c1c9119071650/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2d4756bb441ec1c6bf2771427429c395e14bfb109b38a8fbd1c1c9119071650/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2d4756bb441ec1c6bf2771427429c395e14bfb109b38a8fbd1c1c9119071650/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:14 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2d4756bb441ec1c6bf2771427429c395e14bfb109b38a8fbd1c1c9119071650/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:24:14 compute-0 podman[242642]: 2026-01-20 19:24:14.931470748 +0000 UTC m=+0.137562284 container init e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Jan 20 19:24:14 compute-0 podman[242642]: 2026-01-20 19:24:14.940023084 +0000 UTC m=+0.146114600 container start e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Jan 20 19:24:14 compute-0 podman[242642]: 2026-01-20 19:24:14.943351374 +0000 UTC m=+0.149443030 container attach e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True)
Jan 20 19:24:15 compute-0 lvm[242738]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:24:15 compute-0 lvm[242739]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:24:15 compute-0 lvm[242739]: VG ceph_vg1 finished
Jan 20 19:24:15 compute-0 lvm[242738]: VG ceph_vg0 finished
Jan 20 19:24:15 compute-0 lvm[242741]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:24:15 compute-0 lvm[242741]: VG ceph_vg2 finished
Jan 20 19:24:15 compute-0 nifty_neumann[242658]: {}
Jan 20 19:24:15 compute-0 systemd[1]: libpod-e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858.scope: Deactivated successfully.
Jan 20 19:24:15 compute-0 systemd[1]: libpod-e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858.scope: Consumed 1.245s CPU time.
Jan 20 19:24:15 compute-0 podman[242642]: 2026-01-20 19:24:15.715867637 +0000 UTC m=+0.921959153 container died e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.41.3)
Jan 20 19:24:15 compute-0 systemd[1]: var-lib-containers-storage-overlay-b2d4756bb441ec1c6bf2771427429c395e14bfb109b38a8fbd1c1c9119071650-merged.mount: Deactivated successfully.
Jan 20 19:24:15 compute-0 podman[242642]: 2026-01-20 19:24:15.759245551 +0000 UTC m=+0.965337107 container remove e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nifty_neumann, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Jan 20 19:24:15 compute-0 systemd[1]: libpod-conmon-e39e61638a3c58da915189e1ef00f55dcd16018e8fbd55a32717596997007858.scope: Deactivated successfully.
Jan 20 19:24:15 compute-0 sudo[242539]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:24:15 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:24:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:24:15 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:24:15 compute-0 sudo[242757]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:24:15 compute-0 sudo[242757]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:24:15 compute-0 sudo[242757]: pam_unix(sudo:session): session closed for user root
Jan 20 19:24:15 compute-0 ceph-mon[75120]: pgmap v764: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:24:15 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:24:16 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:17 compute-0 podman[242782]: 2026-01-20 19:24:17.383888493 +0000 UTC m=+0.056099691 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:24:17 compute-0 ceph-mon[75120]: pgmap v765: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:18 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:20 compute-0 ceph-mon[75120]: pgmap v766: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:20 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:22 compute-0 ceph-mon[75120]: pgmap v767: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:22 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:24 compute-0 ceph-mon[75120]: pgmap v768: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:24 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 19:24:26 compute-0 ceph-mon[75120]: pgmap v769: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 0 B/s wr, 2 op/s
Jan 20 19:24:26 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Jan 20 19:24:27 compute-0 ceph-mon[75120]: pgmap v770: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Jan 20 19:24:28 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Jan 20 19:24:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:29 compute-0 ceph-mon[75120]: pgmap v771: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 42 op/s
Jan 20 19:24:30 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:24:31 compute-0 ceph-mon[75120]: pgmap v772: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:24:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:24:31
Jan 20 19:24:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:24:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:24:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.meta', '.mgr', 'images', 'volumes', 'vms', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log']
Jan 20 19:24:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:24:32 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:24:33 compute-0 ceph-mon[75120]: pgmap v773: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:24:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:24:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:24:35 compute-0 ceph-mon[75120]: pgmap v774: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Jan 20 19:24:36 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 20 19:24:37 compute-0 ceph-mon[75120]: pgmap v775: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 57 op/s
Jan 20 19:24:38 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 19:24:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:39 compute-0 ceph-mon[75120]: pgmap v776: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 19:24:40 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 19:24:41 compute-0 ceph-mon[75120]: pgmap v777: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 0 B/s wr, 16 op/s
Jan 20 19:24:42 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:43 compute-0 ceph-mon[75120]: pgmap v778: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:24:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:24:45 compute-0 ceph-mon[75120]: pgmap v779: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:45 compute-0 podman[242801]: 2026-01-20 19:24:45.430190297 +0000 UTC m=+0.102143901 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 20 19:24:46 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:47 compute-0 ceph-mon[75120]: pgmap v780: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:48 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:48 compute-0 podman[242827]: 2026-01-20 19:24:48.407304939 +0000 UTC m=+0.070348336 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 20 19:24:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:49 compute-0 ceph-mon[75120]: pgmap v781: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:24:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4018499975' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:24:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:24:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4018499975' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:24:50 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:50 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/4018499975' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:24:50 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/4018499975' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:24:51 compute-0 ceph-mon[75120]: pgmap v782: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:52 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:53 compute-0 ceph-mon[75120]: pgmap v783: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:54 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:54 compute-0 nova_compute[239038]: 2026-01-20 19:24:54.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:54 compute-0 nova_compute[239038]: 2026-01-20 19:24:54.684 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 20 19:24:54 compute-0 nova_compute[239038]: 2026-01-20 19:24:54.703 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 20 19:24:54 compute-0 nova_compute[239038]: 2026-01-20 19:24:54.705 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:54 compute-0 nova_compute[239038]: 2026-01-20 19:24:54.705 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 20 19:24:54 compute-0 nova_compute[239038]: 2026-01-20 19:24:54.720 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:55 compute-0 ceph-mon[75120]: pgmap v784: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:55 compute-0 nova_compute[239038]: 2026-01-20 19:24:55.733 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:56 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:57 compute-0 ceph-mon[75120]: pgmap v785: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:57 compute-0 nova_compute[239038]: 2026-01-20 19:24:57.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:58 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:58 compute-0 nova_compute[239038]: 2026-01-20 19:24:58.677 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:58 compute-0 nova_compute[239038]: 2026-01-20 19:24:58.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:58 compute-0 nova_compute[239038]: 2026-01-20 19:24:58.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:24:58 compute-0 nova_compute[239038]: 2026-01-20 19:24:58.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:24:58 compute-0 nova_compute[239038]: 2026-01-20 19:24:58.696 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:24:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:24:59 compute-0 ceph-mon[75120]: pgmap v786: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:24:59 compute-0 nova_compute[239038]: 2026-01-20 19:24:59.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:59 compute-0 nova_compute[239038]: 2026-01-20 19:24:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:59 compute-0 nova_compute[239038]: 2026-01-20 19:24:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:59 compute-0 nova_compute[239038]: 2026-01-20 19:24:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:24:59 compute-0 nova_compute[239038]: 2026-01-20 19:24:59.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:25:00 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:00 compute-0 nova_compute[239038]: 2026-01-20 19:25:00.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:00 compute-0 nova_compute[239038]: 2026-01-20 19:25:00.707 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:25:00 compute-0 nova_compute[239038]: 2026-01-20 19:25:00.707 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:25:00 compute-0 nova_compute[239038]: 2026-01-20 19:25:00.708 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:25:00 compute-0 nova_compute[239038]: 2026-01-20 19:25:00.708 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:25:00 compute-0 nova_compute[239038]: 2026-01-20 19:25:00.708 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:25:01 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:25:01 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/689520366' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.269 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.561s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:25:01 compute-0 ceph-mon[75120]: pgmap v787: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:01 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/689520366' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.429 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.430 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5159MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.430 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.430 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.622 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.623 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.700 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Refreshing inventories for resource provider 178956bf-6050-42b7-876f-3f96271cf4ff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.765 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Updating ProviderTree inventory for provider 178956bf-6050-42b7-876f-3f96271cf4ff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.765 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Updating inventory in ProviderTree for provider 178956bf-6050-42b7-876f-3f96271cf4ff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.782 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Refreshing aggregate associations for resource provider 178956bf-6050-42b7-876f-3f96271cf4ff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.805 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Refreshing trait associations for resource provider 178956bf-6050-42b7-876f-3f96271cf4ff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_AVX2,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,COMPUTE_NODE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_ABM,HW_CPU_X86_SSE42,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_STORAGE_BUS_USB,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_AESNI,COMPUTE_DEVICE_TAGGING,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_BMI2,HW_CPU_X86_SSE41,COMPUTE_RESCUE_BFV,HW_CPU_X86_SVM,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_BMI,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_AMD_SVM,HW_CPU_X86_F16C,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE2,COMPUTE_VOLUME_EXTEND,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_FDC,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_SSSE3,COMPUTE_ACCELERATORS,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_AVX,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE4A,HW_CPU_X86_MMX _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 20 19:25:01 compute-0 nova_compute[239038]: 2026-01-20 19:25:01.819 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:25:02 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:25:02 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3318937435' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:25:02 compute-0 nova_compute[239038]: 2026-01-20 19:25:02.342 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:25:02 compute-0 nova_compute[239038]: 2026-01-20 19:25:02.348 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:25:02 compute-0 nova_compute[239038]: 2026-01-20 19:25:02.363 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:25:02 compute-0 nova_compute[239038]: 2026-01-20 19:25:02.365 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:25:02 compute-0 nova_compute[239038]: 2026-01-20 19:25:02.366 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.935s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:25:03 compute-0 ceph-mon[75120]: pgmap v788: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:03 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3318937435' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:25:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:04 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:05 compute-0 ceph-mon[75120]: pgmap v789: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:25:05.452 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:25:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:25:05.453 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:25:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:25:05.453 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:25:06 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:07 compute-0 ceph-mon[75120]: pgmap v790: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:08 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:09 compute-0 ceph-mon[75120]: pgmap v791: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:10 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:11 compute-0 ceph-mon[75120]: pgmap v792: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:12 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:13 compute-0 ceph-mon[75120]: pgmap v793: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:14 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:15 compute-0 sudo[242890]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:25:15 compute-0 sudo[242890]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:15 compute-0 sudo[242890]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:15 compute-0 ceph-mon[75120]: pgmap v794: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:16 compute-0 sudo[242921]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:25:16 compute-0 sudo[242921]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:16 compute-0 podman[242914]: 2026-01-20 19:25:16.055565689 +0000 UTC m=+0.081907952 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 19:25:16 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:16 compute-0 sudo[242921]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Jan 20 19:25:16 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 19:25:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:25:16 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:25:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:25:16 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:25:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:25:16 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:25:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:25:16 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:25:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:25:16 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:25:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:25:16 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:25:16 compute-0 sudo[242997]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:25:16 compute-0 sudo[242997]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:16 compute-0 sudo[242997]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:16 compute-0 sudo[243022]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:25:16 compute-0 sudo[243022]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:16 compute-0 podman[243059]: 2026-01-20 19:25:16.967788771 +0000 UTC m=+0.036190877 container create e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 19:25:17 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} : dispatch
Jan 20 19:25:17 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:25:17 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:25:17 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:25:17 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:25:17 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:25:17 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:25:17 compute-0 systemd[1]: Started libpod-conmon-e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7.scope.
Jan 20 19:25:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:17 compute-0 podman[243059]: 2026-01-20 19:25:16.951799494 +0000 UTC m=+0.020201620 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:25:17 compute-0 podman[243059]: 2026-01-20 19:25:17.053173927 +0000 UTC m=+0.121576033 container init e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:25:17 compute-0 podman[243059]: 2026-01-20 19:25:17.05988158 +0000 UTC m=+0.128283686 container start e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, org.label-schema.build-date=20251030, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:25:17 compute-0 podman[243059]: 2026-01-20 19:25:17.063253991 +0000 UTC m=+0.131656127 container attach e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:25:17 compute-0 sharp_kowalevski[243075]: 167 167
Jan 20 19:25:17 compute-0 systemd[1]: libpod-e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7.scope: Deactivated successfully.
Jan 20 19:25:17 compute-0 conmon[243075]: conmon e4d058f7ea15ff03fde6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7.scope/container/memory.events
Jan 20 19:25:17 compute-0 podman[243059]: 2026-01-20 19:25:17.068442257 +0000 UTC m=+0.136844383 container died e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Jan 20 19:25:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-e05b5868311faf661fc6089df37e4e96d2055abd2e58bde4bb8aa290f84923be-merged.mount: Deactivated successfully.
Jan 20 19:25:17 compute-0 podman[243059]: 2026-01-20 19:25:17.110926394 +0000 UTC m=+0.179328500 container remove e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_kowalevski, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:25:17 compute-0 systemd[1]: libpod-conmon-e4d058f7ea15ff03fde60c78af89e9acac7667bf863137ebb72e797b588c35c7.scope: Deactivated successfully.
Jan 20 19:25:17 compute-0 podman[243100]: 2026-01-20 19:25:17.258269419 +0000 UTC m=+0.037982160 container create fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 20 19:25:17 compute-0 systemd[1]: Started libpod-conmon-fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd.scope.
Jan 20 19:25:17 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:17 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:17 compute-0 podman[243100]: 2026-01-20 19:25:17.328672483 +0000 UTC m=+0.108385234 container init fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:25:17 compute-0 podman[243100]: 2026-01-20 19:25:17.336797999 +0000 UTC m=+0.116510750 container start fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:25:17 compute-0 podman[243100]: 2026-01-20 19:25:17.243229576 +0000 UTC m=+0.022942337 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:25:17 compute-0 podman[243100]: 2026-01-20 19:25:17.339670699 +0000 UTC m=+0.119383460 container attach fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:25:17 compute-0 musing_bhaskara[243117]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:25:17 compute-0 musing_bhaskara[243117]: --> All data devices are unavailable
Jan 20 19:25:17 compute-0 systemd[1]: libpod-fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd.scope: Deactivated successfully.
Jan 20 19:25:17 compute-0 podman[243100]: 2026-01-20 19:25:17.815854801 +0000 UTC m=+0.595567562 container died fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Jan 20 19:25:17 compute-0 systemd[1]: var-lib-containers-storage-overlay-d399ca3e7ae6ee35af6d8b1362fec367fd62cf3013878573ec0a2c8a71ebe3eb-merged.mount: Deactivated successfully.
Jan 20 19:25:17 compute-0 podman[243100]: 2026-01-20 19:25:17.856970035 +0000 UTC m=+0.636682786 container remove fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_bhaskara, ceph=True, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:25:17 compute-0 systemd[1]: libpod-conmon-fbb0bed2d2c5bf12ee600458d2c6e4321c4eff084a1d04bae783a3c637777dcd.scope: Deactivated successfully.
Jan 20 19:25:17 compute-0 sudo[243022]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:17 compute-0 sudo[243151]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:25:17 compute-0 sudo[243151]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:17 compute-0 sudo[243151]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:18 compute-0 ceph-mon[75120]: pgmap v795: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:18 compute-0 sudo[243176]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:25:18 compute-0 sudo[243176]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:18 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:18 compute-0 podman[243213]: 2026-01-20 19:25:18.355730472 +0000 UTC m=+0.058205238 container create bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:25:18 compute-0 systemd[1]: Started libpod-conmon-bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420.scope.
Jan 20 19:25:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:18 compute-0 podman[243213]: 2026-01-20 19:25:18.336638331 +0000 UTC m=+0.039113077 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:25:18 compute-0 podman[243213]: 2026-01-20 19:25:18.44822815 +0000 UTC m=+0.150702906 container init bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:25:18 compute-0 podman[243213]: 2026-01-20 19:25:18.45937557 +0000 UTC m=+0.161850306 container start bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 20 19:25:18 compute-0 podman[243213]: 2026-01-20 19:25:18.463009078 +0000 UTC m=+0.165483804 container attach bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, org.label-schema.build-date=20251030, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 19:25:18 compute-0 magical_payne[243229]: 167 167
Jan 20 19:25:18 compute-0 podman[243213]: 2026-01-20 19:25:18.467015925 +0000 UTC m=+0.169490661 container died bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:25:18 compute-0 systemd[1]: libpod-bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420.scope: Deactivated successfully.
Jan 20 19:25:18 compute-0 systemd[1]: var-lib-containers-storage-overlay-df19cd1b1a96b1f354a43aa808ac093fa33c7466073ffaa43df74474b281a892-merged.mount: Deactivated successfully.
Jan 20 19:25:18 compute-0 podman[243213]: 2026-01-20 19:25:18.507690499 +0000 UTC m=+0.210165225 container remove bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=magical_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:25:18 compute-0 systemd[1]: libpod-conmon-bff212ffe9472719996951f25f2575cbaf838dec06072040d6fd971cd4aa1420.scope: Deactivated successfully.
Jan 20 19:25:18 compute-0 podman[243232]: 2026-01-20 19:25:18.543786613 +0000 UTC m=+0.085697775 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 20 19:25:18 compute-0 podman[243271]: 2026-01-20 19:25:18.673630734 +0000 UTC m=+0.040962872 container create 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 19:25:18 compute-0 systemd[1]: Started libpod-conmon-4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7.scope.
Jan 20 19:25:18 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cd8963922de229c8308cac45d0a2136b3046defca06056ca921d8676d924a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cd8963922de229c8308cac45d0a2136b3046defca06056ca921d8676d924a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:18 compute-0 podman[243271]: 2026-01-20 19:25:18.656459629 +0000 UTC m=+0.023791787 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cd8963922de229c8308cac45d0a2136b3046defca06056ca921d8676d924a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:18 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42cd8963922de229c8308cac45d0a2136b3046defca06056ca921d8676d924a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:18 compute-0 podman[243271]: 2026-01-20 19:25:18.758278482 +0000 UTC m=+0.125610650 container init 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030)
Jan 20 19:25:18 compute-0 podman[243271]: 2026-01-20 19:25:18.772735992 +0000 UTC m=+0.140068140 container start 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Jan 20 19:25:18 compute-0 podman[243271]: 2026-01-20 19:25:18.776043622 +0000 UTC m=+0.143375770 container attach 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:25:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]: {
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:     "0": [
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:         {
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "devices": [
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "/dev/loop3"
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             ],
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_name": "ceph_lv0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_size": "21470642176",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "name": "ceph_lv0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "tags": {
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.cluster_name": "ceph",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.crush_device_class": "",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.encrypted": "0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.objectstore": "bluestore",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.osd_id": "0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.type": "block",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.vdo": "0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.with_tpm": "0"
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             },
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "type": "block",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "vg_name": "ceph_vg0"
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:         }
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:     ],
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:     "1": [
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:         {
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "devices": [
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "/dev/loop4"
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             ],
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_name": "ceph_lv1",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_size": "21470642176",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "name": "ceph_lv1",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "tags": {
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.cluster_name": "ceph",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.crush_device_class": "",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.encrypted": "0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.objectstore": "bluestore",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.osd_id": "1",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.type": "block",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.vdo": "0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.with_tpm": "0"
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             },
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "type": "block",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "vg_name": "ceph_vg1"
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:         }
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:     ],
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:     "2": [
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:         {
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "devices": [
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "/dev/loop5"
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             ],
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_name": "ceph_lv2",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_size": "21470642176",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "name": "ceph_lv2",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "tags": {
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.cluster_name": "ceph",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.crush_device_class": "",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.encrypted": "0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.objectstore": "bluestore",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.osd_id": "2",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.type": "block",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.vdo": "0",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:                 "ceph.with_tpm": "0"
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             },
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "type": "block",
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:             "vg_name": "ceph_vg2"
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:         }
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]:     ]
Jan 20 19:25:19 compute-0 nostalgic_blackwell[243288]: }
Jan 20 19:25:19 compute-0 systemd[1]: libpod-4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7.scope: Deactivated successfully.
Jan 20 19:25:19 compute-0 podman[243297]: 2026-01-20 19:25:19.1376036 +0000 UTC m=+0.030783075 container died 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:25:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-42cd8963922de229c8308cac45d0a2136b3046defca06056ca921d8676d924a2-merged.mount: Deactivated successfully.
Jan 20 19:25:19 compute-0 podman[243297]: 2026-01-20 19:25:19.172744421 +0000 UTC m=+0.065923896 container remove 4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_blackwell, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:25:19 compute-0 systemd[1]: libpod-conmon-4de4228f2ae1c4b88fc3a49788c97c3a3d6edc7bdcb559f76bfaf36d2ce40df7.scope: Deactivated successfully.
Jan 20 19:25:19 compute-0 sudo[243176]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:19 compute-0 sudo[243312]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:25:19 compute-0 sudo[243312]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:19 compute-0 sudo[243312]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:19 compute-0 sudo[243337]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:25:19 compute-0 sudo[243337]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:19 compute-0 podman[243375]: 2026-01-20 19:25:19.627536144 +0000 UTC m=+0.037826516 container create 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 20 19:25:19 compute-0 systemd[1]: Started libpod-conmon-7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a.scope.
Jan 20 19:25:19 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:19 compute-0 podman[243375]: 2026-01-20 19:25:19.61207898 +0000 UTC m=+0.022369372 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:25:19 compute-0 podman[243375]: 2026-01-20 19:25:19.718821073 +0000 UTC m=+0.129111485 container init 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:25:19 compute-0 podman[243375]: 2026-01-20 19:25:19.730167067 +0000 UTC m=+0.140457439 container start 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:25:19 compute-0 podman[243375]: 2026-01-20 19:25:19.734018191 +0000 UTC m=+0.144308603 container attach 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 20 19:25:19 compute-0 distracted_ritchie[243392]: 167 167
Jan 20 19:25:19 compute-0 systemd[1]: libpod-7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a.scope: Deactivated successfully.
Jan 20 19:25:19 compute-0 podman[243375]: 2026-01-20 19:25:19.73485535 +0000 UTC m=+0.145145712 container died 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:25:19 compute-0 systemd[1]: var-lib-containers-storage-overlay-3ffdd10b4a27aec843de14b22ef3b9f083893be91e51eaee06eae81dd6b671b1-merged.mount: Deactivated successfully.
Jan 20 19:25:19 compute-0 podman[243375]: 2026-01-20 19:25:19.773313932 +0000 UTC m=+0.183604284 container remove 7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=distracted_ritchie, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:25:19 compute-0 systemd[1]: libpod-conmon-7b113416eda9c0774e71d251f2f1d083eae1b1d6df6289898b35ad9f3ae3cb6a.scope: Deactivated successfully.
Jan 20 19:25:19 compute-0 podman[243416]: 2026-01-20 19:25:19.957938119 +0000 UTC m=+0.054296175 container create a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:25:19 compute-0 systemd[1]: Started libpod-conmon-a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528.scope.
Jan 20 19:25:20 compute-0 ceph-mon[75120]: pgmap v796: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:20 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29874bbefc423cc6ff5fc398ef8f5cd8e7462550196467c10d6c639294933a9b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29874bbefc423cc6ff5fc398ef8f5cd8e7462550196467c10d6c639294933a9b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29874bbefc423cc6ff5fc398ef8f5cd8e7462550196467c10d6c639294933a9b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:20 compute-0 podman[243416]: 2026-01-20 19:25:19.929745606 +0000 UTC m=+0.026103762 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:25:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29874bbefc423cc6ff5fc398ef8f5cd8e7462550196467c10d6c639294933a9b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:25:20 compute-0 podman[243416]: 2026-01-20 19:25:20.035203338 +0000 UTC m=+0.131561444 container init a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, OSD_FLAVOR=default)
Jan 20 19:25:20 compute-0 podman[243416]: 2026-01-20 19:25:20.046131732 +0000 UTC m=+0.142489788 container start a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:25:20 compute-0 podman[243416]: 2026-01-20 19:25:20.049026862 +0000 UTC m=+0.145384918 container attach a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:25:20 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:20 compute-0 lvm[243511]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:25:20 compute-0 lvm[243511]: VG ceph_vg1 finished
Jan 20 19:25:20 compute-0 lvm[243510]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:25:20 compute-0 lvm[243510]: VG ceph_vg0 finished
Jan 20 19:25:20 compute-0 lvm[243513]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:25:20 compute-0 lvm[243513]: VG ceph_vg2 finished
Jan 20 19:25:20 compute-0 sharp_mendeleev[243432]: {}
Jan 20 19:25:20 compute-0 systemd[1]: libpod-a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528.scope: Deactivated successfully.
Jan 20 19:25:20 compute-0 systemd[1]: libpod-a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528.scope: Consumed 1.416s CPU time.
Jan 20 19:25:20 compute-0 podman[243416]: 2026-01-20 19:25:20.929434283 +0000 UTC m=+1.025792359 container died a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Jan 20 19:25:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-29874bbefc423cc6ff5fc398ef8f5cd8e7462550196467c10d6c639294933a9b-merged.mount: Deactivated successfully.
Jan 20 19:25:20 compute-0 podman[243416]: 2026-01-20 19:25:20.972623698 +0000 UTC m=+1.068981754 container remove a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sharp_mendeleev, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Jan 20 19:25:20 compute-0 systemd[1]: libpod-conmon-a1dac8c202feb55121268427c3ec1c6564722c4c6d44d0f0dce98b2566290528.scope: Deactivated successfully.
Jan 20 19:25:21 compute-0 sudo[243337]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:25:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:25:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:25:21 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:25:21 compute-0 sudo[243527]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:25:21 compute-0 sudo[243527]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:25:21 compute-0 sudo[243527]: pam_unix(sudo:session): session closed for user root
Jan 20 19:25:22 compute-0 ceph-mon[75120]: pgmap v797: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:22 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:25:22 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:25:22 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:24 compute-0 ceph-mon[75120]: pgmap v798: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:24 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:26 compute-0 ceph-mon[75120]: pgmap v799: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:26 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:27 compute-0 ceph-mon[75120]: pgmap v800: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:28 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:29 compute-0 ceph-mon[75120]: pgmap v801: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:30 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:31 compute-0 ceph-mon[75120]: pgmap v802: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:25:31
Jan 20 19:25:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:25:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:25:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'images', '.mgr', 'volumes', 'cephfs.cephfs.meta', 'vms', '.rgw.root']
Jan 20 19:25:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:25:32 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:33 compute-0 ceph-mon[75120]: pgmap v803: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:25:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:25:35 compute-0 sshd-session[243552]: Invalid user ubuntu from 45.148.10.240 port 37194
Jan 20 19:25:35 compute-0 sshd-session[243552]: Connection closed by invalid user ubuntu 45.148.10.240 port 37194 [preauth]
Jan 20 19:25:35 compute-0 ceph-mon[75120]: pgmap v804: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:36 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:37 compute-0 ceph-mon[75120]: pgmap v805: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:38 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:39 compute-0 ceph-mon[75120]: pgmap v806: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:40 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:41 compute-0 ceph-mon[75120]: pgmap v807: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:42 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:43 compute-0 ceph-mon[75120]: pgmap v808: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:25:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:25:45 compute-0 ceph-mon[75120]: pgmap v809: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:46 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:46 compute-0 podman[243554]: 2026-01-20 19:25:46.458262875 +0000 UTC m=+0.130471287 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 20 19:25:47 compute-0 ceph-mon[75120]: pgmap v810: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:48 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:49 compute-0 ceph-mon[75120]: pgmap v811: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:49 compute-0 podman[243581]: 2026-01-20 19:25:49.391495045 +0000 UTC m=+0.072177706 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Jan 20 19:25:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:25:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/103715483' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:25:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:25:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/103715483' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:25:50 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:50 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/103715483' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:25:50 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/103715483' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:25:51 compute-0 ceph-mon[75120]: pgmap v812: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:52 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:53 compute-0 ceph-mon[75120]: pgmap v813: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:54 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:55 compute-0 ceph-mon[75120]: pgmap v814: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:56 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:57 compute-0 ceph-mon[75120]: pgmap v815: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:58 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:25:59 compute-0 ceph-mon[75120]: pgmap v816: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:25:59 compute-0 nova_compute[239038]: 2026-01-20 19:25:59.365 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:59 compute-0 nova_compute[239038]: 2026-01-20 19:25:59.365 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:25:59 compute-0 nova_compute[239038]: 2026-01-20 19:25:59.365 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:25:59 compute-0 nova_compute[239038]: 2026-01-20 19:25:59.379 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:25:59 compute-0 nova_compute[239038]: 2026-01-20 19:25:59.380 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:59 compute-0 nova_compute[239038]: 2026-01-20 19:25:59.381 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:59 compute-0 nova_compute[239038]: 2026-01-20 19:25:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:59 compute-0 nova_compute[239038]: 2026-01-20 19:25:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:25:59 compute-0 nova_compute[239038]: 2026-01-20 19:25:59.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:00 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:00 compute-0 nova_compute[239038]: 2026-01-20 19:26:00.676 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:01 compute-0 ceph-mon[75120]: pgmap v817: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:01 compute-0 nova_compute[239038]: 2026-01-20 19:26:01.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:01 compute-0 nova_compute[239038]: 2026-01-20 19:26:01.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:01 compute-0 nova_compute[239038]: 2026-01-20 19:26:01.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:26:01 compute-0 nova_compute[239038]: 2026-01-20 19:26:01.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:26:01 compute-0 nova_compute[239038]: 2026-01-20 19:26:01.717 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:26:01 compute-0 nova_compute[239038]: 2026-01-20 19:26:01.717 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:26:01 compute-0 nova_compute[239038]: 2026-01-20 19:26:01.718 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:26:01 compute-0 nova_compute[239038]: 2026-01-20 19:26:01.718 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:26:01 compute-0 nova_compute[239038]: 2026-01-20 19:26:01.718 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:26:02 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:26:02 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/98757914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:26:02 compute-0 nova_compute[239038]: 2026-01-20 19:26:02.219 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:26:02 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/98757914' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:26:02 compute-0 nova_compute[239038]: 2026-01-20 19:26:02.388 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:26:02 compute-0 nova_compute[239038]: 2026-01-20 19:26:02.389 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5163MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:26:02 compute-0 nova_compute[239038]: 2026-01-20 19:26:02.389 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:26:02 compute-0 nova_compute[239038]: 2026-01-20 19:26:02.390 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:26:02 compute-0 nova_compute[239038]: 2026-01-20 19:26:02.444 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:26:02 compute-0 nova_compute[239038]: 2026-01-20 19:26:02.444 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:26:02 compute-0 nova_compute[239038]: 2026-01-20 19:26:02.458 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:26:02 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:26:02 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1778637990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:26:03 compute-0 nova_compute[239038]: 2026-01-20 19:26:03.018 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:26:03 compute-0 nova_compute[239038]: 2026-01-20 19:26:03.025 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:26:03 compute-0 nova_compute[239038]: 2026-01-20 19:26:03.044 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:26:03 compute-0 nova_compute[239038]: 2026-01-20 19:26:03.047 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:26:03 compute-0 nova_compute[239038]: 2026-01-20 19:26:03.047 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:26:03 compute-0 ceph-mon[75120]: pgmap v818: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:03 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1778637990' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:26:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:04 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:05 compute-0 ceph-mon[75120]: pgmap v819: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:26:05.453 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:26:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:26:05.454 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:26:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:26:05.454 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:26:06 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:07 compute-0 ceph-mon[75120]: pgmap v820: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:08 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:09 compute-0 ceph-mon[75120]: pgmap v821: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:10 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:11 compute-0 ceph-mon[75120]: pgmap v822: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:12 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:13 compute-0 ceph-mon[75120]: pgmap v823: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:14 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:15 compute-0 ceph-mon[75120]: pgmap v824: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:16 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:17 compute-0 ceph-mon[75120]: pgmap v825: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:17 compute-0 podman[243644]: 2026-01-20 19:26:17.427139312 +0000 UTC m=+0.104489080 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 20 19:26:18 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:19 compute-0 ceph-mon[75120]: pgmap v826: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:20 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:20 compute-0 podman[243670]: 2026-01-20 19:26:20.399254523 +0000 UTC m=+0.075382015 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:26:21 compute-0 ceph-mon[75120]: pgmap v827: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:21 compute-0 sudo[243691]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:26:21 compute-0 sudo[243691]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:21 compute-0 sudo[243691]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:22 compute-0 sudo[243716]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ls
Jan 20 19:26:22 compute-0 sudo[243716]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:22 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:22 compute-0 podman[243786]: 2026-01-20 19:26:22.441423233 +0000 UTC m=+0.065188438 container exec b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:26:22 compute-0 podman[243786]: 2026-01-20 19:26:22.54172916 +0000 UTC m=+0.165494365 container exec_died b5c99f106188b5bdc0bcc92c455e7f0c2e845e202329b6c8107df3432fccf681 (image=quay.io/ceph/ceph:v20, name=ceph-90fff835-31df-513f-a409-b6642f04e6ac-mon-compute-0, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:26:23 compute-0 sudo[243716]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:26:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:26:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:26:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:26:23 compute-0 sudo[243969]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:26:23 compute-0 sudo[243969]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:23 compute-0 sudo[243969]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:23 compute-0 sudo[243994]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:26:23 compute-0 sudo[243994]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:23 compute-0 sudo[243994]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:26:23 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:26:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:26:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:26:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:26:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:26:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:26:23 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:26:23 compute-0 ceph-mon[75120]: pgmap v828: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:23 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:26:23 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:26:23 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:26:23 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:26:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:26:23 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:26:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:26:23 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:26:23 compute-0 sudo[244050]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:26:23 compute-0 sudo[244050]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:23 compute-0 sudo[244050]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:23 compute-0 sudo[244075]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:26:23 compute-0 sudo[244075]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:24 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:24 compute-0 podman[244113]: 2026-01-20 19:26:24.255353651 +0000 UTC m=+0.038765829 container create 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:26:24 compute-0 systemd[1]: Started libpod-conmon-517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297.scope.
Jan 20 19:26:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:24 compute-0 podman[244113]: 2026-01-20 19:26:24.33343844 +0000 UTC m=+0.116850648 container init 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030)
Jan 20 19:26:24 compute-0 podman[244113]: 2026-01-20 19:26:24.239380914 +0000 UTC m=+0.022793112 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:26:24 compute-0 podman[244113]: 2026-01-20 19:26:24.33918665 +0000 UTC m=+0.122598828 container start 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle)
Jan 20 19:26:24 compute-0 podman[244113]: 2026-01-20 19:26:24.342655583 +0000 UTC m=+0.126067761 container attach 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 20 19:26:24 compute-0 admiring_swirles[244129]: 167 167
Jan 20 19:26:24 compute-0 systemd[1]: libpod-517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297.scope: Deactivated successfully.
Jan 20 19:26:24 compute-0 conmon[244129]: conmon 517305a995d3d09ec60c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297.scope/container/memory.events
Jan 20 19:26:24 compute-0 podman[244113]: 2026-01-20 19:26:24.345155604 +0000 UTC m=+0.128567802 container died 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:26:24 compute-0 systemd[1]: var-lib-containers-storage-overlay-a90d5d639e583dfee5fe97d90d244603e43dacdea7f249203ae3240dc2ae22bb-merged.mount: Deactivated successfully.
Jan 20 19:26:24 compute-0 podman[244113]: 2026-01-20 19:26:24.437137689 +0000 UTC m=+0.220549867 container remove 517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=admiring_swirles, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.41.3, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:26:24 compute-0 systemd[1]: libpod-conmon-517305a995d3d09ec60c17fe53ff933491a688d6c8c9b40c80aa8df58ceb5297.scope: Deactivated successfully.
Jan 20 19:26:24 compute-0 podman[244153]: 2026-01-20 19:26:24.576756757 +0000 UTC m=+0.036691979 container create cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:26:24 compute-0 systemd[1]: Started libpod-conmon-cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2.scope.
Jan 20 19:26:24 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:24 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:24 compute-0 podman[244153]: 2026-01-20 19:26:24.64754983 +0000 UTC m=+0.107485062 container init cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.build-date=20251030, ceph=True, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:26:24 compute-0 podman[244153]: 2026-01-20 19:26:24.654825626 +0000 UTC m=+0.114760858 container start cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, OSD_FLAVOR=default, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:26:24 compute-0 podman[244153]: 2026-01-20 19:26:24.562452411 +0000 UTC m=+0.022387663 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:26:24 compute-0 podman[244153]: 2026-01-20 19:26:24.658449394 +0000 UTC m=+0.118384626 container attach cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Jan 20 19:26:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:26:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:26:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:26:24 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:26:25 compute-0 elastic_sanderson[244171]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:26:25 compute-0 elastic_sanderson[244171]: --> All data devices are unavailable
Jan 20 19:26:25 compute-0 systemd[1]: libpod-cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2.scope: Deactivated successfully.
Jan 20 19:26:25 compute-0 podman[244191]: 2026-01-20 19:26:25.160867159 +0000 UTC m=+0.024151575 container died cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.build-date=20251030)
Jan 20 19:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-d0586f776c0d42788629a66b3a2cc47e60ecdd7d4627c7e1c084fea11a7a40bf-merged.mount: Deactivated successfully.
Jan 20 19:26:25 compute-0 podman[244191]: 2026-01-20 19:26:25.195569159 +0000 UTC m=+0.058853545 container remove cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=elastic_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030)
Jan 20 19:26:25 compute-0 systemd[1]: libpod-conmon-cae67269f119d44bd804a8c93eb4f55c0cf949409f90aa1cc83ff8c6f3e757c2.scope: Deactivated successfully.
Jan 20 19:26:25 compute-0 sudo[244075]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:25 compute-0 sudo[244206]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:26:25 compute-0 sudo[244206]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:25 compute-0 sudo[244206]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:25 compute-0 sudo[244231]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:26:25 compute-0 sudo[244231]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:25 compute-0 podman[244269]: 2026-01-20 19:26:25.617755225 +0000 UTC m=+0.038234877 container create 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 20 19:26:25 compute-0 systemd[1]: Started libpod-conmon-2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1.scope.
Jan 20 19:26:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:25 compute-0 podman[244269]: 2026-01-20 19:26:25.682223104 +0000 UTC m=+0.102702776 container init 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3)
Jan 20 19:26:25 compute-0 podman[244269]: 2026-01-20 19:26:25.687380198 +0000 UTC m=+0.107859850 container start 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Jan 20 19:26:25 compute-0 podman[244269]: 2026-01-20 19:26:25.69071063 +0000 UTC m=+0.111190282 container attach 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3)
Jan 20 19:26:25 compute-0 condescending_hodgkin[244285]: 167 167
Jan 20 19:26:25 compute-0 systemd[1]: libpod-2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1.scope: Deactivated successfully.
Jan 20 19:26:25 compute-0 podman[244269]: 2026-01-20 19:26:25.692104613 +0000 UTC m=+0.112584265 container died 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, CEPH_REF=tentacle, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 20 19:26:25 compute-0 podman[244269]: 2026-01-20 19:26:25.601238805 +0000 UTC m=+0.021718467 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:26:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-424d49f4c66655cb6d10ff89431f2c9fa980ace198d12db723d0630e02f0d00e-merged.mount: Deactivated successfully.
Jan 20 19:26:25 compute-0 podman[244269]: 2026-01-20 19:26:25.726658079 +0000 UTC m=+0.147137741 container remove 2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=condescending_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=tentacle)
Jan 20 19:26:25 compute-0 systemd[1]: libpod-conmon-2d5c202422262d8d17f5678abf033bd07dfe54ce5359bfe19ff941a2ba0f9ff1.scope: Deactivated successfully.
Jan 20 19:26:25 compute-0 podman[244308]: 2026-01-20 19:26:25.870455578 +0000 UTC m=+0.037337714 container create 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 20 19:26:25 compute-0 ceph-mon[75120]: pgmap v829: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:25 compute-0 systemd[1]: Started libpod-conmon-8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5.scope.
Jan 20 19:26:25 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbab0e82d89cbc9cf644d02bfb1b938e76445f4918478cc5c9ce785b86760bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbab0e82d89cbc9cf644d02bfb1b938e76445f4918478cc5c9ce785b86760bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbab0e82d89cbc9cf644d02bfb1b938e76445f4918478cc5c9ce785b86760bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:25 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fbbab0e82d89cbc9cf644d02bfb1b938e76445f4918478cc5c9ce785b86760bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:25 compute-0 podman[244308]: 2026-01-20 19:26:25.949018559 +0000 UTC m=+0.115900695 container init 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3)
Jan 20 19:26:25 compute-0 podman[244308]: 2026-01-20 19:26:25.854335509 +0000 UTC m=+0.021217655 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:26:25 compute-0 podman[244308]: 2026-01-20 19:26:25.96102765 +0000 UTC m=+0.127909786 container start 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 20 19:26:25 compute-0 podman[244308]: 2026-01-20 19:26:25.964642427 +0000 UTC m=+0.131524583 container attach 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:26:26 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:26 compute-0 frosty_edison[244324]: {
Jan 20 19:26:26 compute-0 frosty_edison[244324]:     "0": [
Jan 20 19:26:26 compute-0 frosty_edison[244324]:         {
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "devices": [
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "/dev/loop3"
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             ],
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_name": "ceph_lv0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_size": "21470642176",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "name": "ceph_lv0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "tags": {
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.cluster_name": "ceph",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.crush_device_class": "",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.encrypted": "0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.objectstore": "bluestore",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.osd_id": "0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.type": "block",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.vdo": "0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.with_tpm": "0"
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             },
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "type": "block",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "vg_name": "ceph_vg0"
Jan 20 19:26:26 compute-0 frosty_edison[244324]:         }
Jan 20 19:26:26 compute-0 frosty_edison[244324]:     ],
Jan 20 19:26:26 compute-0 frosty_edison[244324]:     "1": [
Jan 20 19:26:26 compute-0 frosty_edison[244324]:         {
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "devices": [
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "/dev/loop4"
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             ],
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_name": "ceph_lv1",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_size": "21470642176",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "name": "ceph_lv1",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "tags": {
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.cluster_name": "ceph",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.crush_device_class": "",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.encrypted": "0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.objectstore": "bluestore",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.osd_id": "1",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.type": "block",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.vdo": "0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.with_tpm": "0"
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             },
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "type": "block",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "vg_name": "ceph_vg1"
Jan 20 19:26:26 compute-0 frosty_edison[244324]:         }
Jan 20 19:26:26 compute-0 frosty_edison[244324]:     ],
Jan 20 19:26:26 compute-0 frosty_edison[244324]:     "2": [
Jan 20 19:26:26 compute-0 frosty_edison[244324]:         {
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "devices": [
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "/dev/loop5"
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             ],
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_name": "ceph_lv2",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_size": "21470642176",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "name": "ceph_lv2",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "tags": {
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.cluster_name": "ceph",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.crush_device_class": "",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.encrypted": "0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.objectstore": "bluestore",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.osd_id": "2",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.type": "block",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.vdo": "0",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:                 "ceph.with_tpm": "0"
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             },
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "type": "block",
Jan 20 19:26:26 compute-0 frosty_edison[244324]:             "vg_name": "ceph_vg2"
Jan 20 19:26:26 compute-0 frosty_edison[244324]:         }
Jan 20 19:26:26 compute-0 frosty_edison[244324]:     ]
Jan 20 19:26:26 compute-0 frosty_edison[244324]: }
Jan 20 19:26:26 compute-0 systemd[1]: libpod-8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5.scope: Deactivated successfully.
Jan 20 19:26:26 compute-0 podman[244308]: 2026-01-20 19:26:26.2557249 +0000 UTC m=+0.422607036 container died 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle)
Jan 20 19:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-fbbab0e82d89cbc9cf644d02bfb1b938e76445f4918478cc5c9ce785b86760bf-merged.mount: Deactivated successfully.
Jan 20 19:26:26 compute-0 podman[244308]: 2026-01-20 19:26:26.292574992 +0000 UTC m=+0.459457128 container remove 8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=frosty_edison, ceph=True, org.label-schema.build-date=20251030, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=tentacle, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 19:26:26 compute-0 systemd[1]: libpod-conmon-8679b0add8369b7e10c9d7141c032887161733323bccd81b1a47da564a67efa5.scope: Deactivated successfully.
Jan 20 19:26:26 compute-0 sudo[244231]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:26 compute-0 sudo[244343]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:26:26 compute-0 sudo[244343]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:26 compute-0 sudo[244343]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:26 compute-0 sudo[244368]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:26:26 compute-0 sudo[244368]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:26 compute-0 podman[244406]: 2026-01-20 19:26:26.709773126 +0000 UTC m=+0.038876652 container create 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 20 19:26:26 compute-0 systemd[1]: Started libpod-conmon-589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00.scope.
Jan 20 19:26:26 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:26 compute-0 podman[244406]: 2026-01-20 19:26:26.776402038 +0000 UTC m=+0.105505574 container init 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle)
Jan 20 19:26:26 compute-0 podman[244406]: 2026-01-20 19:26:26.782390262 +0000 UTC m=+0.111493788 container start 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030)
Jan 20 19:26:26 compute-0 podman[244406]: 2026-01-20 19:26:26.785772455 +0000 UTC m=+0.114875981 container attach 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:26:26 compute-0 vibrant_cartwright[244422]: 167 167
Jan 20 19:26:26 compute-0 podman[244406]: 2026-01-20 19:26:26.692518018 +0000 UTC m=+0.021621564 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:26:26 compute-0 systemd[1]: libpod-589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00.scope: Deactivated successfully.
Jan 20 19:26:26 compute-0 podman[244406]: 2026-01-20 19:26:26.78766271 +0000 UTC m=+0.116766236 container died 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Jan 20 19:26:26 compute-0 systemd[1]: var-lib-containers-storage-overlay-4884514c1c9217a4593aa466dce71895e4a53665b02c159efe74d4706ac11087-merged.mount: Deactivated successfully.
Jan 20 19:26:26 compute-0 podman[244406]: 2026-01-20 19:26:26.823851716 +0000 UTC m=+0.152955242 container remove 589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=vibrant_cartwright, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:26:26 compute-0 systemd[1]: libpod-conmon-589194e426d2862cf078c1c80bf843aa30dad9c867bb9d0cad4565c780dd5d00.scope: Deactivated successfully.
Jan 20 19:26:27 compute-0 podman[244446]: 2026-01-20 19:26:27.013646088 +0000 UTC m=+0.053729631 container create 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, CEPH_REF=tentacle)
Jan 20 19:26:27 compute-0 systemd[1]: Started libpod-conmon-3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1.scope.
Jan 20 19:26:27 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284a3b277e0c7a80fece7270c958e006947fc6c46c8481d45d25730eed25ff1a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284a3b277e0c7a80fece7270c958e006947fc6c46c8481d45d25730eed25ff1a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284a3b277e0c7a80fece7270c958e006947fc6c46c8481d45d25730eed25ff1a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/284a3b277e0c7a80fece7270c958e006947fc6c46c8481d45d25730eed25ff1a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:26:27 compute-0 podman[244446]: 2026-01-20 19:26:27.077149675 +0000 UTC m=+0.117233238 container init 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:26:27 compute-0 podman[244446]: 2026-01-20 19:26:26.985276211 +0000 UTC m=+0.025359854 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:26:27 compute-0 podman[244446]: 2026-01-20 19:26:27.084872442 +0000 UTC m=+0.124955985 container start 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:26:27 compute-0 podman[244446]: 2026-01-20 19:26:27.087783322 +0000 UTC m=+0.127866875 container attach 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2)
Jan 20 19:26:27 compute-0 lvm[244541]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:26:27 compute-0 lvm[244541]: VG ceph_vg0 finished
Jan 20 19:26:27 compute-0 lvm[244542]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:26:27 compute-0 lvm[244542]: VG ceph_vg1 finished
Jan 20 19:26:27 compute-0 lvm[244544]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:26:27 compute-0 lvm[244544]: VG ceph_vg2 finished
Jan 20 19:26:27 compute-0 sad_gagarin[244463]: {}
Jan 20 19:26:27 compute-0 ceph-mon[75120]: pgmap v830: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:27 compute-0 systemd[1]: libpod-3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1.scope: Deactivated successfully.
Jan 20 19:26:27 compute-0 systemd[1]: libpod-3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1.scope: Consumed 1.277s CPU time.
Jan 20 19:26:27 compute-0 podman[244446]: 2026-01-20 19:26:27.9060599 +0000 UTC m=+0.946143473 container died 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Jan 20 19:26:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-284a3b277e0c7a80fece7270c958e006947fc6c46c8481d45d25730eed25ff1a-merged.mount: Deactivated successfully.
Jan 20 19:26:27 compute-0 podman[244446]: 2026-01-20 19:26:27.951625043 +0000 UTC m=+0.991708586 container remove 3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=sad_gagarin, org.label-schema.license=GPLv2, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:26:27 compute-0 systemd[1]: libpod-conmon-3f06057c397232ed39bb2e8830a01873c30e640d011bf1695d2e41a072293cd1.scope: Deactivated successfully.
Jan 20 19:26:27 compute-0 sudo[244368]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:27 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:26:28 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:28 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:26:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:26:28 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:26:28 compute-0 sudo[244559]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:26:28 compute-0 sudo[244559]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:26:28 compute-0 sudo[244559]: pam_unix(sudo:session): session closed for user root
Jan 20 19:26:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:29 compute-0 ceph-mon[75120]: pgmap v831: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:26:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:26:30 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:31 compute-0 ceph-mon[75120]: pgmap v832: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:26:31
Jan 20 19:26:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:26:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:26:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', 'images', 'backups', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'volumes', '.rgw.root']
Jan 20 19:26:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:26:32 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:33 compute-0 ceph-mon[75120]: pgmap v833: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:26:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:26:35 compute-0 ceph-mon[75120]: pgmap v834: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:36 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:37 compute-0 ceph-mon[75120]: pgmap v835: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:38 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:39 compute-0 ceph-mon[75120]: pgmap v836: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:40 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:41 compute-0 ceph-mon[75120]: pgmap v837: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:42 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:43 compute-0 ceph-mon[75120]: pgmap v838: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:43 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] _maybe_adjust
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 1.375568233648222e-06 of space, bias 4.0, pg target 0.0016506818803778663 quantized to 16 (current 16)
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 4.1969867161554995e-06 of space, bias 1.0, pg target 0.0012590960148466499 quantized to 32 (current 32)
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Jan 20 19:26:44 compute-0 ceph-mgr[75417]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Jan 20 19:26:45 compute-0 ceph-mon[75120]: pgmap v839: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:46 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:47 compute-0 ceph-mon[75120]: pgmap v840: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:48 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:48 compute-0 podman[244584]: 2026-01-20 19:26:48.441549768 +0000 UTC m=+0.107851940 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:26:48 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Jan 20 19:26:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155780243' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:26:49 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Jan 20 19:26:49 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/155780243' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:26:49 compute-0 ceph-mon[75120]: pgmap v841: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:49 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/155780243' entity='client.openstack' cmd={"prefix":"df", "format":"json"} : dispatch
Jan 20 19:26:49 compute-0 ceph-mon[75120]: from='client.? 192.168.122.10:0/155780243' entity='client.openstack' cmd={"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} : dispatch
Jan 20 19:26:50 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:51 compute-0 podman[244611]: 2026-01-20 19:26:51.403894253 +0000 UTC m=+0.074336960 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 20 19:26:51 compute-0 ceph-mon[75120]: pgmap v842: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:52 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:53 compute-0 ceph-mon[75120]: pgmap v843: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:53 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:54 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:55 compute-0 ceph-mon[75120]: pgmap v844: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:56 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:57 compute-0 ceph-mon[75120]: pgmap v845: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:58 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:26:58 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:26:59 compute-0 ceph-mon[75120]: pgmap v846: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:00 compute-0 nova_compute[239038]: 2026-01-20 19:27:00.048 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:00 compute-0 nova_compute[239038]: 2026-01-20 19:27:00.049 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:00 compute-0 nova_compute[239038]: 2026-01-20 19:27:00.049 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:00 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:00 compute-0 nova_compute[239038]: 2026-01-20 19:27:00.678 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:00 compute-0 nova_compute[239038]: 2026-01-20 19:27:00.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:00 compute-0 nova_compute[239038]: 2026-01-20 19:27:00.682 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 20 19:27:00 compute-0 nova_compute[239038]: 2026-01-20 19:27:00.683 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 20 19:27:00 compute-0 nova_compute[239038]: 2026-01-20 19:27:00.821 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 20 19:27:01 compute-0 sshd-session[244631]: Accepted publickey for zuul from 192.168.122.10 port 58420 ssh2: ECDSA SHA256:/mbN/LbwW8xNom+4LcuAOoyrQQn10T3qWZE8cJZFLgE
Jan 20 19:27:01 compute-0 systemd-logind[797]: New session 52 of user zuul.
Jan 20 19:27:01 compute-0 systemd[1]: Started Session 52 of User zuul.
Jan 20 19:27:01 compute-0 sshd-session[244631]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 20 19:27:01 compute-0 sudo[244635]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 20 19:27:01 compute-0 sudo[244635]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 20 19:27:01 compute-0 nova_compute[239038]: 2026-01-20 19:27:01.682 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:01 compute-0 nova_compute[239038]: 2026-01-20 19:27:01.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:02 compute-0 ceph-mon[75120]: pgmap v847: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:02 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:02 compute-0 nova_compute[239038]: 2026-01-20 19:27:02.683 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:02 compute-0 nova_compute[239038]: 2026-01-20 19:27:02.720 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:27:02 compute-0 nova_compute[239038]: 2026-01-20 19:27:02.721 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:27:02 compute-0 nova_compute[239038]: 2026-01-20 19:27:02.721 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:27:02 compute-0 nova_compute[239038]: 2026-01-20 19:27:02.722 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 20 19:27:02 compute-0 nova_compute[239038]: 2026-01-20 19:27:02.722 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:27:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:27:03 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1704669262' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:27:03 compute-0 nova_compute[239038]: 2026-01-20 19:27:03.265 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:27:03 compute-0 nova_compute[239038]: 2026-01-20 19:27:03.422 239044 WARNING nova.virt.libvirt.driver [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 20 19:27:03 compute-0 nova_compute[239038]: 2026-01-20 19:27:03.423 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 20 19:27:03 compute-0 nova_compute[239038]: 2026-01-20 19:27:03.423 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:27:03 compute-0 nova_compute[239038]: 2026-01-20 19:27:03.424 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:27:03 compute-0 nova_compute[239038]: 2026-01-20 19:27:03.533 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 20 19:27:03 compute-0 nova_compute[239038]: 2026-01-20 19:27:03.533 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 20 19:27:03 compute-0 nova_compute[239038]: 2026-01-20 19:27:03.595 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 20 19:27:03 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14392 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:03 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:27:04 compute-0 ceph-mon[75120]: pgmap v848: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:04 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1704669262' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:27:04 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Jan 20 19:27:04 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032183059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:27:04 compute-0 nova_compute[239038]: 2026-01-20 19:27:04.108 239044 DEBUG oslo_concurrency.processutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 20 19:27:04 compute-0 nova_compute[239038]: 2026-01-20 19:27:04.113 239044 DEBUG nova.compute.provider_tree [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed in ProviderTree for provider: 178956bf-6050-42b7-876f-3f96271cf4ff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 20 19:27:04 compute-0 nova_compute[239038]: 2026-01-20 19:27:04.139 239044 DEBUG nova.scheduler.client.report [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Inventory has not changed for provider 178956bf-6050-42b7-876f-3f96271cf4ff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 20 19:27:04 compute-0 nova_compute[239038]: 2026-01-20 19:27:04.141 239044 DEBUG nova.compute.resource_tracker [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 20 19:27:04 compute-0 nova_compute[239038]: 2026-01-20 19:27:04.142 239044 DEBUG oslo_concurrency.lockutils [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:27:04 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:04 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:04 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14396 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:05 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Jan 20 19:27:05 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/898834180' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 20 19:27:05 compute-0 ceph-mon[75120]: from='client.14392 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:05 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3032183059' entity='client.openstack' cmd={"prefix": "df", "format": "json"} : dispatch
Jan 20 19:27:05 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/898834180' entity='client.admin' cmd={"prefix": "status"} : dispatch
Jan 20 19:27:05 compute-0 nova_compute[239038]: 2026-01-20 19:27:05.142 239044 DEBUG oslo_service.periodic_task [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 20 19:27:05 compute-0 nova_compute[239038]: 2026-01-20 19:27:05.142 239044 DEBUG nova.compute.manager [None req-c8ca254e-2395-4410-8f61-6222fd156147 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 20 19:27:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:27:05.455 154796 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 20 19:27:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:27:05.455 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 20 19:27:05 compute-0 ovn_metadata_agent[154791]: 2026-01-20 19:27:05.455 154796 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 20 19:27:06 compute-0 ceph-mon[75120]: pgmap v849: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:06 compute-0 ceph-mon[75120]: from='client.14396 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:06 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:08 compute-0 ceph-mon[75120]: pgmap v850: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:08 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:08 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:27:09 compute-0 ceph-mon[75120]: pgmap v851: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:10 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:10 compute-0 ovs-vsctl[245006]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 20 19:27:11 compute-0 ceph-mon[75120]: pgmap v852: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:11 compute-0 virtqemud[238596]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 20 19:27:11 compute-0 virtqemud[238596]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 20 19:27:11 compute-0 virtqemud[238596]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 20 19:27:11 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: cache status {prefix=cache status} (starting...)
Jan 20 19:27:11 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: client ls {prefix=client ls} (starting...)
Jan 20 19:27:12 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:12 compute-0 lvm[245355]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:27:12 compute-0 lvm[245355]: VG ceph_vg0 finished
Jan 20 19:27:12 compute-0 lvm[245358]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:27:12 compute-0 lvm[245358]: VG ceph_vg2 finished
Jan 20 19:27:12 compute-0 lvm[245366]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:27:12 compute-0 lvm[245366]: VG ceph_vg1 finished
Jan 20 19:27:12 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14400 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:12 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: damage ls {prefix=damage ls} (starting...)
Jan 20 19:27:12 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14402 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:12 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump loads {prefix=dump loads} (starting...)
Jan 20 19:27:13 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Jan 20 19:27:13 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Jan 20 19:27:13 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14404 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:13 compute-0 ceph-mon[75120]: pgmap v853: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:13 compute-0 ceph-mon[75120]: from='client.14400 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:13 compute-0 ceph-mon[75120]: from='client.14402 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:13 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Jan 20 19:27:13 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Jan 20 19:27:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0)
Jan 20 19:27:13 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3430812925' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 20 19:27:13 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Jan 20 19:27:13 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14408 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:13 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: 2026-01-20T19:27:13.794+0000 7f97a9c36640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 19:27:13 compute-0 ceph-mgr[75417]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 19:27:13 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: get subtrees {prefix=get subtrees} (starting...)
Jan 20 19:27:13 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:27:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:27:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/961119628' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:27:14 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:14 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: ops {prefix=ops} (starting...)
Jan 20 19:27:14 compute-0 ceph-mon[75120]: from='client.14404 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:14 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3430812925' entity='client.admin' cmd={"prefix": "report"} : dispatch
Jan 20 19:27:14 compute-0 ceph-mon[75120]: from='client.14408 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:14 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/961119628' entity='client.admin' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:27:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Jan 20 19:27:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/751632128' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 20 19:27:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0)
Jan 20 19:27:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/761661735' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 20 19:27:14 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: session ls {prefix=session ls} (starting...)
Jan 20 19:27:14 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 20 19:27:14 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3770531288' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 20 19:27:15 compute-0 ceph-mds[95894]: mds.cephfs.compute-0.djcctc asok_command: status {prefix=status} (starting...)
Jan 20 19:27:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0)
Jan 20 19:27:15 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/717942052' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 20 19:27:15 compute-0 ceph-mon[75120]: pgmap v854: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:15 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/751632128' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm"} : dispatch
Jan 20 19:27:15 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/761661735' entity='client.admin' cmd={"prefix": "config log"} : dispatch
Jan 20 19:27:15 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3770531288' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 20 19:27:15 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/717942052' entity='client.admin' cmd={"prefix": "config-key dump"} : dispatch
Jan 20 19:27:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 20 19:27:15 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/715054790' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 20 19:27:15 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14422 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:15 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14426 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:15 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 20 19:27:15 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813597264' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 20 19:27:16 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:16 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/715054790' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 20 19:27:16 compute-0 ceph-mon[75120]: from='client.14422 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:16 compute-0 ceph-mon[75120]: from='client.14426 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:16 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1813597264' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 20 19:27:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0)
Jan 20 19:27:16 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1298277820' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 20 19:27:16 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 19:27:16 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1572877461' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 20 19:27:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Jan 20 19:27:17 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3199391415' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 20 19:27:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Jan 20 19:27:17 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1673053243' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 20 19:27:17 compute-0 ceph-mon[75120]: pgmap v855: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:17 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1298277820' entity='client.admin' cmd={"prefix": "features"} : dispatch
Jan 20 19:27:17 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1572877461' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 20 19:27:17 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3199391415' entity='client.admin' cmd={"prefix": "health", "detail": "detail"} : dispatch
Jan 20 19:27:17 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1673053243' entity='client.admin' cmd={"prefix": "mgr stat"} : dispatch
Jan 20 19:27:17 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14436 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:17 compute-0 ceph-mgr[75417]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:27:17 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: 2026-01-20T19:27:17.538+0000 7f97a9c36640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Jan 20 19:27:17 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 20 19:27:17 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2409765975' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 20 19:27:18 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Jan 20 19:27:18 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4042601415' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 20 19:27:18 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:18 compute-0 ceph-mon[75120]: from='client.14436 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:18 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2409765975' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 20 19:27:18 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/4042601415' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} : dispatch
Jan 20 19:27:18 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14444 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 63963136 unmapped: 1040384 heap: 65003520 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 53 heartbeat osd_stat(store_statfs(0x4fe13d000/0x0/0x4ffc00000, data 0x42c2c/0x8d000, compress 0x0/0x0/0x0, omap 0x5f5b, meta 0x1a2a0a5), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:22.448573+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 53 handle_osd_map epochs [53,54], i have 53, src has [1,54]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 53 handle_osd_map epochs [53,54], i have 54, src has [1,54]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.931865 2 0.000066
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.989015 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.931968 2 0.000045
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.931937 2 0.000046
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988297 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988035 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003315 2 0.000053
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007934 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003038 2 0.000040
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007306 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.001881 2 0.000041
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007109 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932421 2 0.000026
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988290 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932518 2 0.000045
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.987898 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932030 2 0.000023
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982515 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933010 2 0.000034
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.988595 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932195 2 0.000029
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.931463 2 0.000035
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979825 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932392 2 0.000033
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983629 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983868 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933110 2 0.000036
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986979 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.931819 2 0.001242
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985734 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932674 2 0.000383
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985556 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933097 2 0.000035
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933062 2 0.000034
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985067 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932051 2 0.000079
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982306 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003630 2 0.000758
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008051 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932650 2 0.000046
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985482 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933739 2 0.000039
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003716 2 0.000045
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986836 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007929 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933760 2 0.000046
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933163 2 0.000068
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.986617 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983442 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933440 2 0.000033
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.985215 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933205 2 0.000135
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983432 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933169 2 0.000089
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.984367 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.003645 2 0.000038
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007855 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933199 2 0.000027
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.983141 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933133 2 0.000046
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981694 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933093 2 0.000034
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981930 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932293 2 0.001040
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.982586 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932640 2 0.000025
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.978172 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932985 2 0.000043
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980701 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933206 2 0.000033
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981211 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932699 2 0.000035
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.981763 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932948 2 0.000034
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980305 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.932939 2 0.000030
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980591 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933183 2 0.000036
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980110 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933304 2 0.000030
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980354 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004722 2 0.000039
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933127 2 0.000045
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008269 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979284 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933288 2 0.000029
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979539 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933337 2 0.000030
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.979890 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.933576 2 0.000043
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.980242 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005009 2 0.000026
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007318 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.934966 2 0.000050
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.987205 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.935496 2 0.000028
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.935445 2 0.000075
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990042 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005302 2 0.000030
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007299 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990448 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.935686 2 0.000043
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.991053 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006252 3 0.000166
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006355 3 0.000311
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006243 3 0.000086
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.18( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000074 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 0.998431 7 0.000118
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007501 3 0.001453
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 0.999121 7 0.000263
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 1.000466 7 0.000159
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 0.999243 7 0.000062
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009263 3 0.000181
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009489 4 0.000163
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1b( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009110 3 0.000155
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009373 3 0.000278
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.1d( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008629 3 0.000127
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008757 3 0.000058
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008794 3 0.000161
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008670 3 0.000218
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000026 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.8( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.15( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009234 3 0.000392
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.3( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009075 3 0.000092
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.12( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009082 3 0.000067
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.d( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009110 3 0.000077
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008781 3 0.000180
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008840 3 0.000144
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.8( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008854 3 0.000091
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.e( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008843 3 0.000204
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008860 3 0.000052
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008902 3 0.000062
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.2( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.008984 3 0.000516
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.5( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.001402 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.001528 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.7( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009831 3 0.000059
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009832 3 0.000064
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.9( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009739 3 0.000061
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009733 3 0.000057
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.e( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.5( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009822 3 0.000053
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.2( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009836 4 0.000039
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009814 3 0.000057
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.a( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.8( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009799 3 0.000058
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009733 3 0.000047
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009757 3 0.000092
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.15( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.e( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009829 3 0.000038
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000002 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.a( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009703 3 0.000043
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009748 3 0.000082
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.11( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009511 3 0.000056
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.11( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009505 3 0.000149
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1a( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009469 3 0.000062
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1c( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009474 3 0.000047
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.13( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009466 3 0.000130
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1f( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009414 3 0.000076
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000003 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.16( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 1.016080 7 0.000080
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] exit Started/Stray 1.003360 7 0.000129
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 39'18 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010099 3 0.000059
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009596 3 0.000070
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1b( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.1e( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009620 3 0.000321
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009571 3 0.000065
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009251 4 0.000070
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.11( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[4.1c( empty local-lis/les=53/54 n=0 ec=43/19 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009247 3 0.000298
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[7.1c( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009288 3 0.000322
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009135 3 0.000278
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[3.18( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [2] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.11( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.010437 3 0.001291
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[11.18( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [2] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.010002 4 0.000629
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 54 handle_osd_map epochs [54,54], i have 54, src has [1,54]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000682 1 0.000067
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.027700 7 0.000044
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.037426 7 0.000054
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033234 7 0.000081
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036610 7 0.000077
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.039181 7 0.000048
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031779 7 0.000062
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029392 7 0.000161
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.029529 7 0.000256
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032488 7 0.000084
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032003 7 0.000091
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036221 7 0.000055
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032638 7 0.000304
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035516 7 0.000178
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030568 7 0.000122
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034830 7 0.000078
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032503 7 0.000277
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031496 7 0.000081
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034311 7 0.000173
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034501 7 0.000111
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036260 7 0.000144
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034468 7 0.005704
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040245 7 0.000066
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030372 7 0.000326
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.037972 7 0.000047
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034885 7 0.000551
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030449 7 0.000055
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036949 7 0.000094
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.043846 7 0.000082
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.041646 7 0.000061
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.043313 7 0.000062
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042737 7 0.000059
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042576 7 0.000061
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040934 7 0.000107
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040896 7 0.000051
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.039113 7 0.000062
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040166 7 0.000111
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042453 7 0.000114
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033940 7 0.000061
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036484 7 0.000138
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.039106 7 0.000102
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036641 7 0.000275
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.037257 7 0.000097
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036019 7 0.000057
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.038111 7 0.000097
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.037206 7 0.000056
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.038906 7 0.005878
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.038419 7 0.000299
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.038243 7 0.000144
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036511 7 0.000709
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036137 7 0.000068
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.044796 7 0.000240
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035189 7 0.000086
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035023 7 0.000091
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.040408 7 0.000062
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035849 7 0.000069
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.036095 7 0.000070
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.042259 7 0.003978
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.035587 7 0.000261
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034287 7 0.000521
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.074794 2 0.000082
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [2] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.058803 1 0.000090
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050990 1 0.000085
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049658 1 0.000061
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049533 1 0.000036
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049550 1 0.000053
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049591 1 0.000059
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049635 1 0.000026
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049752 1 0.000141
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049735 1 0.000036
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049801 1 0.000175
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049782 1 0.000027
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049825 1 0.000028
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049860 1 0.000025
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049873 1 0.000023
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049909 1 0.000057
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049958 1 0.000022
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049957 1 0.000037
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.049983 1 0.000020
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050032 1 0.000251
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050160 1 0.000124
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050086 1 0.000134
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050180 1 0.000038
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050222 1 0.000046
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050263 1 0.000020
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050304 1 0.000112
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050339 1 0.000029
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.050369 1 0.000066
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046484 1 0.000031
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046520 1 0.000028
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046533 1 0.000090
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046602 1 0.000017
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046578 1 0.000025
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046627 1 0.000025
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046689 1 0.000023
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046718 1 0.000029
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046800 1 0.000018
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046794 1 0.000024
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046863 1 0.000057
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046862 1 0.000023
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046931 1 0.000077
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047076 1 0.000362
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047017 1 0.000132
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047047 1 0.000172
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047106 1 0.000163
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047108 1 0.000281
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.046968 1 0.000022
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047001 1 0.000030
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047050 1 0.000043
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047111 1 0.000059
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047134 1 0.000061
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047193 1 0.000021
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047228 1 0.000040
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047260 1 0.000035
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047314 1 0.000037
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047384 1 0.000208
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047441 1 0.000022
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047490 1 0.000025
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.047532 1 0.000132
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.045517 1 0.000066
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.18( empty local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.007707 1 0.000143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.066638 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.14( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.094398 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.014757 1 0.000052
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.065808 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.16( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.103295 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.104083 2 0.000076
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 0.104119 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000282 1 0.000122
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022130 1 0.000045
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.071839 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.8( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.105158 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.030044 1 0.000039
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.079626 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1e( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.118845 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036896 1 0.000063
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.086514 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.2( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.118330 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044245 1 0.000041
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.093880 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.17( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.123412 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051605 1 0.000056
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.101287 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.130848 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65273856 unmapped: 778240 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 486703 data_alloc: 218103808 data_used: 1361
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058882 1 0.000058
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.108696 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.15( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.145346 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:23.448747+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.007978 1 0.000036
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.057826 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.3( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.090019 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.008118 1 0.000065
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.057932 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.2( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.090462 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.007936 1 0.000046
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.057851 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1c( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.088502 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.007977 1 0.000038
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.057910 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.f( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.093502 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.008182 1 0.000046
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.058004 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.5( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.090721 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.012571 1 0.000034
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.062493 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.4( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.095049 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.012589 1 0.000046
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.012532 1 0.000052
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.062534 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.012493 1 0.000036
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.012567 1 0.000036
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.18( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.098858 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.062604 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.062503 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.19( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.097054 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.062595 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.4( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.097511 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.094140 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.017658 1 0.000046
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.067950 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.7( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.104252 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.017559 1 0.000052
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.067869 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1e( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.108153 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.017735 1 0.000042
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.067889 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.8( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.102585 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.017801 1 0.000053
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.068117 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.7( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [0] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.102678 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.017995 1 0.000047
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.068257 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.16( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.098720 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.019658 1 0.000048
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069982 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.019575 1 0.000050
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.019528 1 0.000036
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069933 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.019649 1 0.000047
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069967 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.11( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.106936 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069985 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.105002 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.100460 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.13( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [0] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.108025 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.019599 1 0.000044
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.066134 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.17( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.110068 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.023359 1 0.000052
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069962 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.023334 1 0.000040
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.11( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.113375 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.069975 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.15( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.112745 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.023476 1 0.000036
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.070042 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.1a( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.111739 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.023410 1 0.000048
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.070076 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.19( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.111114 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.023619 1 0.000064
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.070427 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.16( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.111377 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.026443 1 0.000048
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.073169 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.d( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.112327 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.026407 1 0.000062
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.073176 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.6( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.113391 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.026317 1 0.000059
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.073157 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.5( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.109687 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.026444 1 0.000046
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.073290 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.13( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.115782 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.026342 1 0.000038
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.073303 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.a( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.107299 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.028187 1 0.000066
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075104 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.4( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.111991 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.028088 1 0.000055
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075305 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.12( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.117970 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.028188 1 0.000044
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075182 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.c( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.114357 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.028082 1 0.000051
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075191 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.028178 1 0.000042
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.075227 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.b( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.112554 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.9( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.111307 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.029594 1 0.000034
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.076729 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.7( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.114884 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.029573 1 0.000058
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.029452 1 0.000063
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.076740 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.076500 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.3( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.114112 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.f( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.114805 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 54 handle_osd_map epochs [54,55], i have 54, src has [1,55]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.029786 1 0.000048
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.076817 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.f( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.115606 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.039332 1 0.000056
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086502 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.2( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.122673 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.039310 1 0.000032
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086489 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.10( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.131332 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.039539 1 0.000055
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086653 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.6( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.123236 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.039423 1 0.000035
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086678 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.19( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.121727 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.039503 1 0.000033
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.086724 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1a( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.121953 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.041908 1 0.000048
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.089440 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.1d( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.131748 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.042281 1 0.000056
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.089645 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[2.1b( empty lb MIN local-lis/les=41/42 n=0 ec=41/16 lis/c=41/41 les/c/f=42/42/0 sis=53) [1] r=-1 lpr=53 pi=[41,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.125545 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.042414 1 0.000045
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.089713 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[5.9( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.130160 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.042814 1 0.000043
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.090415 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.11( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.129760 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.043008 1 0.000044
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.090481 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 54 pg[10.13( v 39'18 (0'0,39'18] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 crt=39'18 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.126609 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65175552 unmapped: 876544 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.1( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.096850 4 0.000054
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.1( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.144450 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.1( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.180285 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.18( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 1.096846 4 0.000050
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.18( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.142464 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[5.18( empty lb MIN local-lis/les=45/46 n=0 ec=45/21 lis/c=45/45 les/c/f=46/46/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.176808 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.e( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 1.085582 5 0.000162
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.e( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 1.085939 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.e( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.188582 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.316732 5 0.000059
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 1.316771 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.315560 5 0.000076
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 1.315611 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000134 1 0.000078
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000201 1 0.000122
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 0.046879 2 0.000326
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 0.047130 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.d( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.363088 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 0.048654 2 0.000228
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 0.048928 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.15( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.363859 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 55 heartbeat osd_stat(store_statfs(0x4fe137000/0x0/0x4ffc00000, data 0x44dcf/0x91000, compress 0x0/0x0/0x0, omap 0x61e6, meta 0x1a29e1a), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.511136 5 0.000092
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 1.511211 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000143 1 0.000142
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 0.029850 2 0.000375
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 0.030158 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.9( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=1 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [0] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.541925 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.675452 5 0.000051
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 1.675520 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000165 1 0.000131
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.680923 5 0.000045
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ReplicaActive 1.680961 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000105 1 0.000089
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 0.010742 2 0.000248
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 0.011005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.14( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.689955 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete/Deleting 0.019805 2 0.000248
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started/ToDelete 0.019981 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 55 pg[10.12( v 50'19 (0'0,50'19] lb MIN local-lis/les=49/50 n=0 ec=49/35 lis/c=49/49 les/c/f=50/50/0 sis=53) [1] r=-1 lpr=53 pi=[49,53)/1 pct=0'0 crt=50'19 lcod 39'18 active mbc={}] exit Started 2.717065 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:24.448926+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 64962560 unmapped: 1089536 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:25.449080+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.192572594s of 10.370075226s, submitted: 647
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 64978944 unmapped: 1073152 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:26.449243+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 57 handle_osd_map epochs [57,58], i have 57, src has [1,58]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65044480 unmapped: 1007616 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:27.449418+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65052672 unmapped: 999424 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 405066 data_alloc: 218103808 data_used: 1361
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:28.449577+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:04:58.228744+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:04:58.239315+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 11)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:04:58.228744+0000 osd.2 (osd.2) 10 : cluster [DBG] 2.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:04:58.239315+0000 osd.2 (osd.2) 11 : cluster [DBG] 2.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:29.449790+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65101824 unmapped: 950272 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 58 heartbeat osd_stat(store_statfs(0x4fe12d000/0x0/0x4ffc00000, data 0x4be12/0x9d000, compress 0x0/0x0/0x0, omap 0x6c12, meta 0x1a293ee), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:30.449959+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65118208 unmapped: 933888 heap: 66052096 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:31.450153+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:01.281764+0000 osd.2 (osd.2) 12 : cluster [DBG] 5.1f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:01.292380+0000 osd.2 (osd.2) 13 : cluster [DBG] 5.1f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 58 handle_osd_map epochs [58,59], i have 58, src has [1,59]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 13)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:01.281764+0000 osd.2 (osd.2) 12 : cluster [DBG] 5.1f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:01.292380+0000 osd.2 (osd.2) 13 : cluster [DBG] 5.1f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:32.450678+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66166784 unmapped: 933888 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 411400 data_alloc: 218103808 data_used: 1873
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:33.450842+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:03.289440+0000 osd.2 (osd.2) 14 : cluster [DBG] 10.1f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:03.303437+0000 osd.2 (osd.2) 15 : cluster [DBG] 10.1f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 59 heartbeat osd_stat(store_statfs(0x4fe12a000/0x0/0x4ffc00000, data 0x4db4c/0xa0000, compress 0x0/0x0/0x0, omap 0x6e9d, meta 0x1a29163), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 15)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:03.289440+0000 osd.2 (osd.2) 14 : cluster [DBG] 10.1f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:03.303437+0000 osd.2 (osd.2) 15 : cluster [DBG] 10.1f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:34.451062+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66207744 unmapped: 892928 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:35.451217+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:05.289067+0000 osd.2 (osd.2) 16 : cluster [DBG] 5.10 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:05.299570+0000 osd.2 (osd.2) 17 : cluster [DBG] 5.10 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 17)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:05.289067+0000 osd.2 (osd.2) 16 : cluster [DBG] 5.10 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:05.299570+0000 osd.2 (osd.2) 17 : cluster [DBG] 5.10 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe122000/0x0/0x4ffc00000, data 0x512dd/0xa6000, compress 0x0/0x0/0x0, omap 0x73b3, meta 0x1a28c4d), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.423548698s of 10.460992813s, submitted: 13
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:36.451521+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:06.272904+0000 osd.2 (osd.2) 18 : cluster [DBG] 10.1d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:06.283433+0000 osd.2 (osd.2) 19 : cluster [DBG] 10.1d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 19)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:06.272904+0000 osd.2 (osd.2) 18 : cluster [DBG] 10.1d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:06.283433+0000 osd.2 (osd.2) 19 : cluster [DBG] 10.1d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:37.451762+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe126000/0x0/0x4ffc00000, data 0x512dd/0xa6000, compress 0x0/0x0/0x0, omap 0x73b3, meta 0x1a28c4d), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65978368 unmapped: 1122304 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 423595 data_alloc: 218103808 data_used: 2872
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:38.451988+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:08.303658+0000 osd.2 (osd.2) 20 : cluster [DBG] 10.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:08.314227+0000 osd.2 (osd.2) 21 : cluster [DBG] 10.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe126000/0x0/0x4ffc00000, data 0x512dd/0xa6000, compress 0x0/0x0/0x0, omap 0x73b3, meta 0x1a28c4d), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 21)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:08.303658+0000 osd.2 (osd.2) 20 : cluster [DBG] 10.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:08.314227+0000 osd.2 (osd.2) 21 : cluster [DBG] 10.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:39.452273+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 65994752 unmapped: 1105920 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 61 heartbeat osd_stat(store_statfs(0x4fe126000/0x0/0x4ffc00000, data 0x512dd/0xa6000, compress 0x0/0x0/0x0, omap 0x73b3, meta 0x1a28c4d), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:40.452431+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66011136 unmapped: 1089536 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.14 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.14 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:41.452580+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:11.271182+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.14 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:11.281706+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.14 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66060288 unmapped: 1040384 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 62 handle_osd_map epochs [63,64], i have 62, src has [1,64]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 23)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:11.271182+0000 osd.2 (osd.2) 22 : cluster [DBG] 2.14 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:11.281706+0000 osd.2 (osd.2) 23 : cluster [DBG] 2.14 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:42.452929+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:12.293287+0000 osd.2 (osd.2) 24 : cluster [DBG] 10.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:12.303912+0000 osd.2 (osd.2) 25 : cluster [DBG] 10.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66068480 unmapped: 1032192 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 440770 data_alloc: 218103808 data_used: 2872
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 64 handle_osd_map epochs [63,64], i have 64, src has [1,64]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000124 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000026 1 0.000049
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000072 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000188 1 0.000186
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000056 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000319 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000125 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000039
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000151 1 0.000056
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000200 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000052 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000055 1 0.000034
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000112 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000137 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=0 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000039
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000334 1 0.000097
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000413 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 64 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 25)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:12.293287+0000 osd.2 (osd.2) 24 : cluster [DBG] 10.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:12.303912+0000 osd.2 (osd.2) 25 : cluster [DBG] 10.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 64 handle_osd_map epochs [64,65], i have 64, src has [1,65]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 64 handle_osd_map epochs [64,65], i have 65, src has [1,65]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.327224 2 0.000058
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.327495 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.327547 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.328313 2 0.000157
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.328682 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.328796 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000089 1 0.000110
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000262 1 0.000363
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000059 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.328006 2 0.000064
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.328257 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.328321 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.327025 2 0.000219
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.327491 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.327541 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000134 1 0.000464
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=0 lpr=64 pi=[49,64)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000169 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.6( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000246 1 0.000338
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000023 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 65 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 65 handle_osd_map epochs [65,65], i have 65, src has [1,65]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:43.453158+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:13.342696+0000 osd.2 (osd.2) 26 : cluster [DBG] 2.12 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:13.353144+0000 osd.2 (osd.2) 27 : cluster [DBG] 2.12 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66076672 unmapped: 1024000 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 65 handle_osd_map epochs [66,66], i have 65, src has [1,66]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 66 handle_osd_map epochs [62,66], i have 66, src has [1,66]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000091 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000016
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000159 1 0.000047
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000050 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000278 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=0 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000128 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=0 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000021
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000134 1 0.000053
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000202 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000195 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000033
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000129 1 0.000061
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000039 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000197 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=0 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000044
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000108 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000113 1 0.000245
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000091 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000254 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.055888 6 0.000055
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.054530 6 0.000286
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.055822 6 0.000186
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.054784 6 0.000131
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 27)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:13.342696+0000 osd.2 (osd.2) 26 : cluster [DBG] 2.12 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:13.353144+0000 osd.2 (osd.2) 27 : cluster [DBG] 2.12 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 39'299 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003650 3 0.000476
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 39'299 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 39'299 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000128 1 0.000069
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 lc 39'299 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.043858 1 0.000066
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.048241 3 0.000204
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000289 1 0.000037
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.032815 1 0.000032
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 39'48 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.081460 3 0.000121
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 39'48 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 39'48 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000049 1 0.000044
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 lc 39'48 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:44.453396+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059666 1 0.000065
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 39'90 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.141263 3 0.000278
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 39'90 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 39'90 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000057 1 0.000034
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 lc 39'90 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.045710 1 0.000018
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 66297856 unmapped: 802816 heap: 67100672 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 66 heartbeat osd_stat(store_statfs(0x4fe114000/0x0/0x4ffc00000, data 0x582b7/0xb2000, compress 0x0/0x0/0x0, omap 0x7b54, meta 0x1a284ac), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 66 handle_osd_map epochs [66,67], i have 66, src has [1,67]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 66 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.858271 1 0.000027
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.999562 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.055501 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.918346 1 0.000028
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.999785 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.055723 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001094 2 0.000082
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001360 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000141 1 0.000192
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001397 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000087 1 0.000121
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=0 lpr=66 pi=[56,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000150 1 0.000224
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000010 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.17( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.002480 2 0.000162
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.813610 1 0.000026
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.002808 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 1.000815 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001624 2 0.000081
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.002915 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001856 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001889 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.055570 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000047 1 0.000171
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000211 1 0.000250
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000009 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.001430 2 0.000134
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.001721 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.001882 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=0 lpr=66 pi=[57,66)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.953263 1 0.000028
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 1.000990 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.055872 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000051 1 0.000071
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] r=-1 lpr=65 pi=[49,65)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000079 1 0.000110
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000083 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000804 1 0.000838
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002031 2 0.000071
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002145 2 0.000078
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000326 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.7( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001842 2 0.000039
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001271 2 0.000082
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=20
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000024 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=20
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001269 2 0.000045
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000524 2 0.000080
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001890 2 0.000135
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=13
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=13
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000978 2 0.000070
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: handle_auth_request added challenge on 0x5564ef2f6000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: handle_auth_request added challenge on 0x5564ef2f6400
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:45.453627+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: handle_auth_request added challenge on 0x5564ef2f6800
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67436544 unmapped: 1761280 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.942283630s of 10.073143959s, submitted: 77
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996403 2 0.000143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999954 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996606 2 0.000148
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000017 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997226 2 0.000138
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999696 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996347 2 0.000134
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.999313 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004035 3 0.000144
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.005327 6 0.000065
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.005030 6 0.000054
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.006557 6 0.000065
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.004453 6 0.000980
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006257 3 0.000076
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006440 3 0.000331
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000032 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.007486 3 0.000198
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/49 les/c/f=68/50/0 sis=67) [2] r=0 lpr=67 pi=[49,67)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.006571 3 0.000164
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000047 1 0.000073
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 68 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 68'484 (0'0,68'484] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059291 1 0.000048
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.065893 3 0.000233
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.f( v 68'484 (0'0,68'484] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000124 1 0.000064
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:46.453779+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:16.346115+0000 osd.2 (osd.2) 28 : cluster [DBG] 10.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:16.356663+0000 osd.2 (osd.2) 29 : cluster [DBG] 10.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 68'484 (0'0,68'484] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031668 1 0.000159
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.17( v 68'484 (0'0,68'484] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.098044 3 0.000247
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000101 1 0.000048
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ef2f6400 session 0x5564eea92a80
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ef2f6800 session 0x5564eea93500
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ef2f6000 session 0x5564eea92540
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.040397 1 0.000131
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 lc 39'49 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.138408 3 0.000119
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 lc 39'49 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 lc 39'49 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000118 1 0.000149
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 lc 39'49 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.053319 1 0.000043
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 68 pg[9.7( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67477504 unmapped: 1720320 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: handle_auth_request added challenge on 0x5564ef2f6c00
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: handle_auth_request added challenge on 0x5564ef2f7000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: handle_auth_request added challenge on 0x5564ee511c00
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ef2f7000 session 0x5564ee4fae00
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ee511c00 session 0x5564ed84ba40
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 68 ms_handle_reset con 0x5564ef2f6c00 session 0x5564ee52c700
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 68 heartbeat osd_stat(store_statfs(0x4fe10a000/0x0/0x4ffc00000, data 0x5bf0e/0xbe000, compress 0x0/0x0/0x0, omap 0x806a, meta 0x1a27f96), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 29)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:16.346115+0000 osd.2 (osd.2) 28 : cluster [DBG] 10.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:16.356663+0000 osd.2 (osd.2) 29 : cluster [DBG] 10.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'486 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.861020 1 0.000050
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'486 active+remapped mbc={}] exit Started/ReplicaActive 1.053045 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'486 active+remapped mbc={}] exit Started 2.058343 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'486 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.914812 1 0.000039
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 1.053562 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=68'486 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] exit Reset 0.000083 1 0.000137
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.987614 1 0.000093
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started/ReplicaActive 1.053672 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started 2.059043 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.058815 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[57,67)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=68'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000061 1 0.000075
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] exit Reset 0.000121 1 0.000157
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000158 1 0.000384
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.955952 1 0.000121
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started/ReplicaActive 1.053860 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] exit Started 2.060458 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] r=-1 lpr=67 pi=[56,67)/1 pct=0'0 crt=68'484 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 pct=0'0 crt=68'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] exit Reset 0.000073 1 0.000121
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000206 1 0.000233
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=0/0 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000043 1 0.000076
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000112 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000051 1 0.000791
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=20
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=23
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=20
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001811 3 0.000066
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=23
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002150 3 0.000121
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001869 3 0.000050
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001265 3 0.000150
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:47.454014+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67633152 unmapped: 1564672 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 548195 data_alloc: 218103808 data_used: 4542
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 69 heartbeat osd_stat(store_statfs(0x4fe0fc000/0x0/0x4ffc00000, data 0x5fa86/0xce000, compress 0x0/0x0/0x0, omap 0x806a, meta 0x1a27f96), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.009922 2 0.000045
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011906 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010350 2 0.000380
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012506 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'484 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010755 2 0.000109
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013051 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'486 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011077 2 0.000106
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012559 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/56 les/c/f=70/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002549 3 0.000291
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/56 les/c/f=70/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/56 les/c/f=70/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/56 les/c/f=70/57/0 sis=69) [2] r=0 lpr=69 pi=[56,69)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003291 3 0.000190
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003898 3 0.000198
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004415 3 0.000479
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=69/70 n=7 ec=49/33 lis/c=69/57 les/c/f=70/58/0 sis=69) [2] r=0 lpr=69 pi=[57,69)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:48.454146+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:18.323738+0000 osd.2 (osd.2) 30 : cluster [DBG] 2.10 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:18.334384+0000 osd.2 (osd.2) 31 : cluster [DBG] 2.10 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 70 handle_osd_map epochs [70,70], i have 70, src has [1,70]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 1556480 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 31)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:18.323738+0000 osd.2 (osd.2) 30 : cluster [DBG] 2.10 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:18.334384+0000 osd.2 (osd.2) 31 : cluster [DBG] 2.10 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:49.454461+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:19.355952+0000 osd.2 (osd.2) 32 : cluster [DBG] 5.17 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:19.366577+0000 osd.2 (osd.2) 33 : cluster [DBG] 5.17 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67641344 unmapped: 1556480 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 33)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:19.355952+0000 osd.2 (osd.2) 32 : cluster [DBG] 5.17 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:19.366577+0000 osd.2 (osd.2) 33 : cluster [DBG] 5.17 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:50.454781+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67706880 unmapped: 1490944 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 70 heartbeat osd_stat(store_statfs(0x4fe0f7000/0x0/0x4ffc00000, data 0x614d5/0xd1000, compress 0x0/0x0/0x0, omap 0x806a, meta 0x1a27f96), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:51.455000+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:21.376572+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:21.387146+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 35)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:21.376572+0000 osd.2 (osd.2) 34 : cluster [DBG] 5.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:21.387146+0000 osd.2 (osd.2) 35 : cluster [DBG] 5.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 1482752 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:52.455253+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=0 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=0 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000021 1 0.000043
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000099 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000136 1 0.000221
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001134 2 0.000086
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67731456 unmapped: 1466368 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 562242 data_alloc: 218103808 data_used: 4542
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=0 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000108 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=0 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000026
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000177 1 0.000050
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000042 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000254 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=0 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000062 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=0 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000013
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000100 1 0.000039
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000169 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 71 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 71 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:53.455488+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:23.418614+0000 osd.2 (osd.2) 36 : cluster [DBG] 2.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:23.429190+0000 osd.2 (osd.2) 37 : cluster [DBG] 2.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 71 handle_osd_map epochs [71,72], i have 71, src has [1,72]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 71 handle_osd_map epochs [72,72], i have 72, src has [1,72]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.961079 2 0.000126
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.962510 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.509527 2 0.000095
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.509838 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.509868 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000271 1 0.000372
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000052 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.8( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.510407 2 0.000082
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.510689 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.510742 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=0 lpr=71 pi=[49,71)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000535 1 0.000700
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000176 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[9.18( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=71/45 les/c/f=72/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004462 4 0.000357
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=71/45 les/c/f=72/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=71/45 les/c/f=72/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=71/72 n=1 ec=45/22 lis/c=71/45 les/c/f=72/47/0 sis=71) [2] r=0 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 72 handle_osd_map epochs [72,72], i have 72, src has [1,72]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 37)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:23.418614+0000 osd.2 (osd.2) 36 : cluster [DBG] 2.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:23.429190+0000 osd.2 (osd.2) 37 : cluster [DBG] 2.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 1425408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:54.455821+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 72 handle_osd_map epochs [72,73], i have 73, src has [1,73]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=68'487 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.001896 6 0.000530
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=68'487 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=68'487 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.004182 6 0.000161
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 39'36 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003029 3 0.000193
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 39'36 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 39'36 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000066 1 0.000085
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 lc 39'36 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.046009 1 0.000064
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 39'53 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.048794 3 0.000413
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 39'53 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 39'53 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000073 1 0.000102
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 lc 39'53 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052550 1 0.000043
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67624960 unmapped: 1572864 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:55.456018+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.913020 1 0.000030
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 1.014550 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.018864 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.965877 1 0.000081
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive 1.015155 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000073 1 0.000107
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started 2.017325 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] r=-1 lpr=72 pi=[49,72)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000042
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Reset 0.000110 1 0.000159
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Start 0.000017 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000065
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=19
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=19
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001194 3 0.000132
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001445 3 0.000077
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67674112 unmapped: 1523712 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.867558479s of 10.107018471s, submitted: 142
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:56.456203+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 3 last_log 40 sent 37 num 3 unsent 3 sending 3
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:25.476766+0000 osd.2 (osd.2) 38 : cluster [DBG] 10.5 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:25.487332+0000 osd.2 (osd.2) 39 : cluster [DBG] 10.5 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:26.453140+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 74 handle_osd_map epochs [74,75], i have 74, src has [1,75]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 74 handle_osd_map epochs [74,75], i have 75, src has [1,75]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993522 2 0.000178
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995136 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.994486 2 0.000106
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.995916 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002418 3 0.000419
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=74/75 n=7 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002876 3 0.000203
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=74/75 n=6 ec=49/33 lis/c=74/49 les/c/f=75/50/0 sis=74) [2] r=0 lpr=74 pi=[49,74)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 40)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:25.476766+0000 osd.2 (osd.2) 38 : cluster [DBG] 10.5 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:25.487332+0000 osd.2 (osd.2) 39 : cluster [DBG] 10.5 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:26.453140+0000 osd.2 (osd.2) 40 : cluster [DBG] 5.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fe0e5000/0x0/0x4ffc00000, data 0x686c7/0xe1000, compress 0x0/0x0/0x0, omap 0x9237, meta 0x1a26dc9), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 1482752 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:57.456487+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 1 last_log 41 sent 40 num 1 unsent 1 sending 1
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:26.463733+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 41)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:26.463733+0000 osd.2 (osd.2) 41 : cluster [DBG] 5.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67715072 unmapped: 1482752 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 606362 data_alloc: 218103808 data_used: 4542
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:58.456741+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:27.500325+0000 osd.2 (osd.2) 42 : cluster [DBG] 2.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:27.511041+0000 osd.2 (osd.2) 43 : cluster [DBG] 2.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 43)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:27.500325+0000 osd.2 (osd.2) 42 : cluster [DBG] 2.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:27.511041+0000 osd.2 (osd.2) 43 : cluster [DBG] 2.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 1474560 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:59.457004+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67723264 unmapped: 1474560 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:00.457151+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:29.548099+0000 osd.2 (osd.2) 44 : cluster [DBG] 5.b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:29.558588+0000 osd.2 (osd.2) 45 : cluster [DBG] 5.b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 45)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:29.548099+0000 osd.2 (osd.2) 44 : cluster [DBG] 5.b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:29.558588+0000 osd.2 (osd.2) 45 : cluster [DBG] 5.b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 75 heartbeat osd_stat(store_statfs(0x4fe0e8000/0x0/0x4ffc00000, data 0x6a116/0xe4000, compress 0x0/0x0/0x0, omap 0x94c2, meta 0x1a26b3e), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 1425408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:01.457488+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:30.527542+0000 osd.2 (osd.2) 46 : cluster [DBG] 10.3 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:30.541598+0000 osd.2 (osd.2) 47 : cluster [DBG] 10.3 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.0 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.0 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 75 handle_osd_map epochs [75,76], i have 75, src has [1,76]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 47)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:30.527542+0000 osd.2 (osd.2) 46 : cluster [DBG] 10.3 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:30.541598+0000 osd.2 (osd.2) 47 : cluster [DBG] 10.3 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67772416 unmapped: 1425408 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:02.457717+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:31.485583+0000 osd.2 (osd.2) 48 : cluster [DBG] 2.0 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:31.496076+0000 osd.2 (osd.2) 49 : cluster [DBG] 2.0 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 49)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:31.485583+0000 osd.2 (osd.2) 48 : cluster [DBG] 2.0 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:31.496076+0000 osd.2 (osd.2) 49 : cluster [DBG] 2.0 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 76 handle_osd_map epochs [77,77], i have 76, src has [1,77]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 1376256 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 621877 data_alloc: 218103808 data_used: 4542
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:03.457906+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 77 heartbeat osd_stat(store_statfs(0x4fe0de000/0x0/0x4ffc00000, data 0x6d89f/0xea000, compress 0x0/0x0/0x0, omap 0x99d8, meta 0x1a26628), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 1368064 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:04.458023+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 1368064 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:05.458176+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1318912 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:06.458381+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1318912 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:07.458524+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1310720 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 627885 data_alloc: 218103808 data_used: 4542
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:08.458652+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.0 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.020220757s of 12.071414948s, submitted: 21
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.0 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67887104 unmapped: 1310720 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:09.458778+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:38.524292+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.0 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:38.534846+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.0 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 79 heartbeat osd_stat(store_statfs(0x4fe0d6000/0x0/0x4ffc00000, data 0x71028/0xf0000, compress 0x0/0x0/0x0, omap 0x9eee, meta 0x1a26112), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.0 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.0 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 79 handle_osd_map epochs [79,80], i have 79, src has [1,80]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 51)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:38.524292+0000 osd.2 (osd.2) 50 : cluster [DBG] 5.0 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:38.534846+0000 osd.2 (osd.2) 51 : cluster [DBG] 5.0 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 80 heartbeat osd_stat(store_statfs(0x4fe0dc000/0x0/0x4ffc00000, data 0x71028/0xf0000, compress 0x0/0x0/0x0, omap 0x9eee, meta 0x1a26112), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=0 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000126 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=0 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000018 1 0.000037
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000132 1 0.000064
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000030 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000183 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=0 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000408 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=0 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000052 1 0.000075
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000134 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000126 1 0.000253
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000255 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000449 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 80 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1294336 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:10.459184+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:39.475227+0000 osd.2 (osd.2) 52 : cluster [DBG] 10.0 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:39.489421+0000 osd.2 (osd.2) 53 : cluster [DBG] 10.0 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 53)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:39.475227+0000 osd.2 (osd.2) 52 : cluster [DBG] 10.0 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:39.489421+0000 osd.2 (osd.2) 53 : cluster [DBG] 10.0 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.815960 2 0.000063
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.816189 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.816223 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000099 1 0.000150
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.816342 2 0.000344
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.816847 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.817023 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=0 lpr=80 pi=[49,80)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000056 1 0.000084
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000004 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 81 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 81 handle_osd_map epochs [81,81], i have 81, src has [1,81]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67928064 unmapped: 1269760 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 81 heartbeat osd_stat(store_statfs(0x4fe0d7000/0x0/0x4ffc00000, data 0x72d62/0xf3000, compress 0x0/0x0/0x0, omap 0xa179, meta 0x1a25e87), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:11.459398+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67944448 unmapped: 1253376 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:12.459512+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 81 handle_osd_map epochs [82,82], i have 81, src has [1,82]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.693610 5 0.000047
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.690950 5 0.000057
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 39'69 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003531 4 0.000130
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 39'69 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 39'69 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000064 1 0.000086
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 lc 39'69 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.035801 1 0.000058
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.039411 4 0.000285
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000059 1 0.000071
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.066930 1 0.000033
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 82 heartbeat osd_stat(store_statfs(0x4fe0d2000/0x0/0x4ffc00000, data 0x74815/0xf6000, compress 0x0/0x0/0x0, omap 0xa404, meta 0x1a25bfc), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.207877 1 0.000067
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive 0.314435 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started 2.005429 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.274948 1 0.000129
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.314554 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.008204 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] r=-1 lpr=81 pi=[49,81)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000126 1 0.000216
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000052 1 0.000057
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Reset 0.000561 1 0.000611
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Start 0.000199 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000114 1 0.000505
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=10
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000741 3 0.000079
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=10
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001722 3 0.000055
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67821568 unmapped: 1376256 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 668710 data_alloc: 218103808 data_used: 4542
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:13.459654+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 83 handle_osd_map epochs [83,84], i have 83, src has [1,84]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 83 handle_osd_map epochs [84,84], i have 84, src has [1,84]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.012510 2 0.000218
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014391 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.014021 2 0.000083
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.014977 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002833 4 0.000273
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=83/84 n=7 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002298 4 0.000148
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/49 les/c/f=84/50/0 sis=83) [2] r=0 lpr=83 pi=[49,83)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67829760 unmapped: 1368064 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:14.459768+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1327104 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:15.459904+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:44.531756+0000 osd.2 (osd.2) 54 : cluster [DBG] 2.1 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:44.542356+0000 osd.2 (osd.2) 55 : cluster [DBG] 2.1 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 84 handle_osd_map epochs [84,85], i have 85, src has [1,85]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 55)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:44.531756+0000 osd.2 (osd.2) 54 : cluster [DBG] 2.1 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:44.542356+0000 osd.2 (osd.2) 55 : cluster [DBG] 2.1 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67870720 unmapped: 1327104 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:16.460076+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:46.448672+0000 osd.2 (osd.2) 56 : cluster [DBG] 5.6 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:46.459243+0000 osd.2 (osd.2) 57 : cluster [DBG] 5.6 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 57)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:46.448672+0000 osd.2 (osd.2) 56 : cluster [DBG] 5.6 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:46.459243+0000 osd.2 (osd.2) 57 : cluster [DBG] 5.6 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67878912 unmapped: 1318912 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 85 handle_osd_map epochs [86,86], i have 85, src has [1,86]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:17.460214+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:47.449412+0000 osd.2 (osd.2) 58 : cluster [DBG] 10.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:47.459930+0000 osd.2 (osd.2) 59 : cluster [DBG] 10.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 86 handle_osd_map epochs [86,87], i have 86, src has [1,87]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 59)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:47.449412+0000 osd.2 (osd.2) 58 : cluster [DBG] 10.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:47.459930+0000 osd.2 (osd.2) 59 : cluster [DBG] 10.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1294336 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 686729 data_alloc: 218103808 data_used: 4794
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:18.460416+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 87 heartbeat osd_stat(store_statfs(0x4fe0bd000/0x0/0x4ffc00000, data 0x7edc3/0x10b000, compress 0x0/0x0/0x0, omap 0xb346, meta 0x1a24cba), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67903488 unmapped: 1294336 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:19.460608+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 87 handle_osd_map epochs [87,88], i have 87, src has [1,88]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.231353760s of 11.357475281s, submitted: 56
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 67911680 unmapped: 1286144 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 88 heartbeat osd_stat(store_statfs(0x4fe0bd000/0x0/0x4ffc00000, data 0x7edc3/0x10b000, compress 0x0/0x0/0x0, omap 0xb346, meta 0x1a24cba), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:20.460832+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 1 last_log 60 sent 59 num 1 unsent 1 sending 1
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:50.453619+0000 osd.2 (osd.2) 60 : cluster [DBG] 10.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 10.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=0 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000155 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=0 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000042 1 0.000079
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000129 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000277 1 0.000298
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.001117 2 0.000086
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 88 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 60)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:50.453619+0000 osd.2 (osd.2) 60 : cluster [DBG] 10.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 88 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.428262 2 0.000141
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 0.429802 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.002405 4 0.000347
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000191 1 0.000111
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68042752 unmapped: 1155072 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.126729 2 0.000066
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=88/89 n=1 ec=45/22 lis/c=88/59 les/c/f=89/60/0 sis=88) [2] r=0 lpr=88 pi=[59,88)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:21.461120+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 3 last_log 63 sent 60 num 3 unsent 3 sending 3
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:50.464201+0000 osd.2 (osd.2) 61 : cluster [DBG] 10.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:51.433475+0000 osd.2 (osd.2) 62 : cluster [DBG] 5.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:51.443982+0000 osd.2 (osd.2) 63 : cluster [DBG] 5.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68059136 unmapped: 1138688 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:22.461414+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 5 last_log 65 sent 63 num 5 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:52.413412+0000 osd.2 (osd.2) 64 : cluster [DBG] 5.d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:52.423903+0000 osd.2 (osd.2) 65 : cluster [DBG] 5.d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 89 handle_osd_map epochs [89,90], i have 89, src has [1,90]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 63)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:50.464201+0000 osd.2 (osd.2) 61 : cluster [DBG] 10.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:51.433475+0000 osd.2 (osd.2) 62 : cluster [DBG] 5.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:51.443982+0000 osd.2 (osd.2) 63 : cluster [DBG] 5.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1130496 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 708622 data_alloc: 218103808 data_used: 4794
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:23.461693+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 65)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:52.413412+0000 osd.2 (osd.2) 64 : cluster [DBG] 5.d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:52.423903+0000 osd.2 (osd.2) 65 : cluster [DBG] 5.d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68067328 unmapped: 1130496 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:24.461853+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 1122304 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 91 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0x85eea/0x119000, compress 0x0/0x0/0x0, omap 0xbec2, meta 0x1a2413e), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:25.461997+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68075520 unmapped: 1122304 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:26.462222+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68091904 unmapped: 1105920 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:27.462522+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:57.425457+0000 osd.2 (osd.2) 66 : cluster [DBG] 5.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:57.435397+0000 osd.2 (osd.2) 67 : cluster [DBG] 5.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 91 handle_osd_map epochs [92,93], i have 91, src has [1,93]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 67)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:57.425457+0000 osd.2 (osd.2) 66 : cluster [DBG] 5.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:57.435397+0000 osd.2 (osd.2) 67 : cluster [DBG] 5.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=0 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000128 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=0 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000037
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000211 1 0.000056
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000051 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000291 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 93 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 93 heartbeat osd_stat(store_statfs(0x4fe0ad000/0x0/0x4ffc00000, data 0x85eea/0x119000, compress 0x0/0x0/0x0, omap 0xbec2, meta 0x1a2413e), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 720441 data_alloc: 218103808 data_used: 4794
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:28.462695+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 1 last_log 68 sent 67 num 1 unsent 1 sending 1
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:58.461895+0000 osd.2 (osd.2) 68 : cluster [DBG] 5.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 93 handle_osd_map epochs [93,94], i have 93, src has [1,94]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 93 handle_osd_map epochs [93,94], i have 94, src has [1,94]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.783453 2 0.000092
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.783782 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.783808 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=0 lpr=93 pi=[56,93)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000107 1 0.000148
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 94 pg[9.13( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 94 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 68)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:58.461895+0000 osd.2 (osd.2) 68 : cluster [DBG] 5.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68149248 unmapped: 1048576 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:29.462931+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 3 last_log 71 sent 68 num 3 unsent 3 sending 3
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:58.472472+0000 osd.2 (osd.2) 69 : cluster [DBG] 5.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:59.434743+0000 osd.2 (osd.2) 70 : cluster [DBG] 7.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:05:59.445314+0000 osd.2 (osd.2) 71 : cluster [DBG] 7.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 0'0 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=68'485 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.008943 6 0.000053
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 0'0 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=68'485 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 0'0 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 crt=68'485 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 39'131 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.005830 3 0.000257
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 39'131 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 39'131 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000217 1 0.000094
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 lc 39'131 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.057229 1 0.000083
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 71)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:58.472472+0000 osd.2 (osd.2) 69 : cluster [DBG] 5.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:59.434743+0000 osd.2 (osd.2) 70 : cluster [DBG] 7.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:05:59.445314+0000 osd.2 (osd.2) 71 : cluster [DBG] 7.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68321280 unmapped: 876544 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.224848747s of 10.569451332s, submitted: 87
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:30.463123+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:00.451680+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:00.462222+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 95 handle_osd_map epochs [96,96], i have 95, src has [1,96]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.943626 1 0.000081
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started/ReplicaActive 1.007072 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started 2.016063 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] r=-1 lpr=94 pi=[56,94)/1 pct=0'0 crt=68'485 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 pct=0'0 crt=68'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] exit Reset 0.000104 1 0.000145
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000040
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=16
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=16
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001291 3 0.000047
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 851968 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:31.463307+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68345856 unmapped: 851968 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 96 heartbeat osd_stat(store_statfs(0x4fcefe000/0x0/0x4ffc00000, data 0x8e7ca/0x12a000, compress 0x0/0x0/0x0, omap 0xc8ee, meta 0x2bc3712), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 73)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:00.451680+0000 osd.2 (osd.2) 72 : cluster [DBG] 4.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:00.462222+0000 osd.2 (osd.2) 73 : cluster [DBG] 4.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 96 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.796672 2 0.000068
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.798081 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=96/56 les/c/f=97/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001914 4 0.000159
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=96/56 les/c/f=97/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=96/56 les/c/f=97/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=96/97 n=6 ec=49/33 lis/c=96/56 les/c/f=97/57/0 sis=96) [2] r=0 lpr=96 pi=[56,96)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:32.463501+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68395008 unmapped: 802816 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 747833 data_alloc: 218103808 data_used: 4794
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 97 handle_osd_map epochs [97,98], i have 97, src has [1,98]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:33.463690+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 98 handle_osd_map epochs [98,99], i have 98, src has [1,99]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68411392 unmapped: 786432 heap: 69197824 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:34.463879+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 99 handle_osd_map epochs [100,100], i have 99, src has [1,100]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=39'483 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 48.139029 99 0.000466
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active 48.146604 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary 49.146655 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=39'483 mlcod 0'0 active mbc={}] exit Started 49.146724 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=39'483 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861665726s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 active pruub 157.961791992s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] exit Reset 0.000547 1 0.000698
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] exit Start 0.000116 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 100 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100 pruub=15.861179352s) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY pruub 157.961791992s@ mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68435968 unmapped: 1810432 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:35.464028+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.021002 3 0.000239
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.021199 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=-1 lpr=100 pi=[67,100)/1 crt=39'483 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Reset 0.000064 1 0.000097
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000035
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000023 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 101 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcef1000/0x0/0x4ffc00000, data 0x96e89/0x139000, compress 0x0/0x0/0x0, omap 0xd5a5, meta 0x2bc2a5b), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68509696 unmapped: 1736704 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 101 heartbeat osd_stat(store_statfs(0x4fcef1000/0x0/0x4ffc00000, data 0x96e89/0x139000, compress 0x0/0x0/0x0, omap 0xd5a5, meta 0x2bc2a5b), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:36.464181+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 101 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007044 4 0.000061
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.007150 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.004814 5 0.000275
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000090 1 0.000048
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000670 1 0.000079
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.057140 2 0.000055
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68534272 unmapped: 1712128 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:37.464331+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 102 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.947837 1 0.000220
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active 1.010849 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary 2.018031 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started 2.018065 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] async=[0] r=0 lpr=101 pi=[67,101)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993875504s) [0] async=[0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 active pruub 160.133941650s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] exit Reset 0.000456 1 0.000567
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] exit Start 0.000093 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103 pruub=14.993478775s) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY pruub 160.133941650s@ mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 103 handle_osd_map epochs [103,103], i have 103, src has [1,103]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcee7000/0x0/0x4ffc00000, data 0x9a35d/0x13f000, compress 0x0/0x0/0x0, omap 0xdabb, meta 0x2bc2545), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68542464 unmapped: 1703936 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765529 data_alloc: 218103808 data_used: 4904
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 103 heartbeat osd_stat(store_statfs(0x4fcee7000/0x0/0x4ffc00000, data 0x9a35d/0x13f000, compress 0x0/0x0/0x0, omap 0xdabb, meta 0x2bc2545), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:38.464493+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.056840 6 0.000263
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000263 2 0.000065
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 DELETING pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.032254 2 0.000243
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete 0.032606 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=-1 lpr=103 pi=[67,103)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.089623 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1613824 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:39.464669+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1613824 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:40.464848+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.063548088s of 10.183242798s, submitted: 32
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1613824 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:41.465040+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:10.634861+0000 osd.2 (osd.2) 74 : cluster [DBG] 8.15 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:10.645473+0000 osd.2 (osd.2) 75 : cluster [DBG] 8.15 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 75)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:10.634861+0000 osd.2 (osd.2) 74 : cluster [DBG] 8.15 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:10.645473+0000 osd.2 (osd.2) 75 : cluster [DBG] 8.15 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 1613824 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:42.465257+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 68640768 unmapped: 1605632 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 762039 data_alloc: 218103808 data_used: 5054
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:43.465437+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 104 heartbeat osd_stat(store_statfs(0x4fcee9000/0x0/0x4ffc00000, data 0x9bd3e/0x141000, compress 0x0/0x0/0x0, omap 0xdd46, meta 0x2bc22ba), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 104 handle_osd_map epochs [105,105], i have 105, src has [1,105]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69697536 unmapped: 548864 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:44.465606+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 540672 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:45.465786+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 106 heartbeat osd_stat(store_statfs(0x4fcee3000/0x0/0x4ffc00000, data 0x9f476/0x147000, compress 0x0/0x0/0x0, omap 0xe25c, meta 0x2bc1da4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19(unlocked)] enter Initial
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=0 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000113 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=0 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000024
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000171 1 0.000058
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000035 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000225 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 107 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 516096 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:46.465976+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.923305 2 0.000066
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.923582 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.923734 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=0 lpr=107 pi=[57,107)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000184 1 0.000368
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000029 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 108 pg[9.19( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 108 heartbeat osd_stat(store_statfs(0x4fcede000/0x0/0x4ffc00000, data 0xa1012/0x14a000, compress 0x0/0x0/0x0, omap 0xe4e7, meta 0x2bc1b19), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 507904 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:47.466138+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69672960 unmapped: 573440 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 775601 data_alloc: 218103808 data_used: 5639
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:48.466276+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.858556 5 0.000129
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 39'58 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004983 4 0.000192
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 39'58 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 39'58 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000109 1 0.000045
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 lc 39'58 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.074592 1 0.000051
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 109 handle_osd_map epochs [109,110], i have 110, src has [1,110]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.185366 1 0.000060
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive 0.265193 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started 2.123832 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] r=-1 lpr=108 pi=[57,108)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Reset 0.000233 1 0.000290
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Start 0.000017 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001443 2 0.000094
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Jan 20 19:27:18 compute-0 ceph-osd[88112]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000582 2 0.000125
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 401408 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:49.466503+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 110 handle_osd_map epochs [111,111], i have 110, src has [1,111]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 110 handle_osd_map epochs [111,111], i have 111, src has [1,111]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.013576 2 0.000089
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.015694 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=110/57 les/c/f=111/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002124 3 0.000192
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=110/57 les/c/f=111/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=110/57 les/c/f=111/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=110/111 n=6 ec=49/33 lis/c=110/57 les/c/f=111/58/0 sis=110) [2] r=0 lpr=110 pi=[57,110)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 376832 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:50.466642+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fced2000/0x0/0x4ffc00000, data 0xa6254/0x156000, compress 0x0/0x0/0x0, omap 0xec88, meta 0x2bc1378), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 368640 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:51.466783+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.734509468s of 10.946245193s, submitted: 42
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69705728 unmapped: 540672 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:52.466920+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:21.581733+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.1e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:21.592185+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.1e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 111 heartbeat osd_stat(store_statfs(0x4fced3000/0x0/0x4ffc00000, data 0xa7df8/0x159000, compress 0x0/0x0/0x0, omap 0xef13, meta 0x2bc10ed), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=83) [2] r=0 lpr=83 crt=68'487 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 38.961532 84 0.000369
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=83) [2] r=0 lpr=83 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active 38.963950 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=83) [2] r=0 lpr=83 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary 39.978968 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=83) [2] r=0 lpr=83 crt=68'487 mlcod 0'0 active mbc={}] exit Started 39.979246 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=83) [2] r=0 lpr=83 crt=68'487 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038787842s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 active pruub 169.396194458s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] exit Reset 0.000096 1 0.000188
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 112 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112 pruub=9.038736343s) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY pruub 169.396194458s@ mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 112 handle_osd_map epochs [112,112], i have 112, src has [1,112]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69722112 unmapped: 524288 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 802415 data_alloc: 218103808 data_used: 5639
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 77)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:21.581733+0000 osd.2 (osd.2) 76 : cluster [DBG] 3.1e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:21.592185+0000 osd.2 (osd.2) 77 : cluster [DBG] 3.1e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 112 handle_osd_map epochs [112,113], i have 112, src has [1,113]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:53.467076+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started/Stray 0.694344 3 0.000052
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started 0.694413 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=-1 lpr=112 pi=[83,112)/1 crt=68'487 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] exit Reset 0.000303 1 0.000373
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] exit Start 0.000117 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000050 1 0.000287
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 113 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69730304 unmapped: 516096 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:54.467277+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 114 heartbeat osd_stat(store_statfs(0x4fcecb000/0x0/0x4ffc00000, data 0xab415/0x15f000, compress 0x0/0x0/0x0, omap 0xf429, meta 0x2bc0bd7), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.002405 4 0.000138
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.002610 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=83/84 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.005545 5 0.000338
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000090 1 0.000073
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000385 1 0.000031
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.063677 2 0.000046
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 507904 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:55.467468+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:24.660016+0000 osd.2 (osd.2) 78 : cluster [DBG] 4.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:24.670500+0000 osd.2 (osd.2) 79 : cluster [DBG] 4.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 79)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:24.660016+0000 osd.2 (osd.2) 78 : cluster [DBG] 4.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:24.670500+0000 osd.2 (osd.2) 79 : cluster [DBG] 4.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 114 handle_osd_map epochs [114,115], i have 114, src has [1,115]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.956725 1 0.000110
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] exit Started/Primary/Active 1.026769 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] exit Started/Primary 2.029428 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] exit Started 2.029658 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] async=[0] r=0 lpr=113 pi=[83,113)/1 crt=68'487 mlcod 68'487 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978566170s) [0] async=[0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 active pruub 178.060546875s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] exit Reset 0.000318 1 0.000459
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] exit Start 0.000053 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115 pruub=14.978322029s) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY pruub 178.060546875s@ mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 115 handle_osd_map epochs [115,115], i have 115, src has [1,115]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69738496 unmapped: 507904 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:56.467798+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:25.696097+0000 osd.2 (osd.2) 80 : cluster [DBG] 4.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:25.706607+0000 osd.2 (osd.2) 81 : cluster [DBG] 4.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 81)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:25.696097+0000 osd.2 (osd.2) 80 : cluster [DBG] 4.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:25.706607+0000 osd.2 (osd.2) 81 : cluster [DBG] 4.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started/Stray 1.171535 6 0.000265
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=68'484 lcod 68'484 mlcod 68'484 active+clean] exit Started/Primary/Active/Clean 70.295951 149 0.000525
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] exit Started/Primary/Active 70.302267 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] exit Started/Primary 71.301602 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] exit Started 71.301721 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=67) [2] r=0 lpr=67 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705393791s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 active pruub 173.962097168s@ mbc={}] PeeringState::start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] exit Reset 0.000113 1 0.000174
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] exit Start 0.000016 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116 pruub=9.705332756s) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 173.962097168s@ mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003453 2 0.000076
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 DELETING pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.069232 2 0.000235
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started/ToDelete 0.072744 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=-1 lpr=115 pi=[83,115)/1 crt=68'487 unknown NOTIFY mbc={}] exit Started 1.244403 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69828608 unmapped: 417792 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:57.467970+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] exit Started/Stray 0.855783 3 0.000075
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] exit Started 0.855852 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=-1 lpr=116 pi=[67,116)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] exit Reset 0.000098 1 0.000137
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000961 2 0.000053
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000057 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 117 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb1eb9/0x168000, compress 0x0/0x0/0x0, omap 0xfe55, meta 0x2bc01ab), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69844992 unmapped: 401408 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 805930 data_alloc: 218103808 data_used: 5639
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:58.468197+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 117 heartbeat osd_stat(store_statfs(0x4fcec0000/0x0/0x4ffc00000, data 0xb1eb9/0x168000, compress 0x0/0x0/0x0, omap 0xfe55, meta 0x2bc01ab), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 117 handle_osd_map epochs [117,118], i have 118, src has [1,118]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=69) [2] r=0 lpr=69 crt=39'483 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 70.088814 149 0.000618
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=69) [2] r=0 lpr=69 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active 70.092226 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=69) [2] r=0 lpr=69 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary 71.104840 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=69) [2] r=0 lpr=69 crt=39'483 mlcod 0'0 active mbc={}] exit Started 71.105520 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=69) [2] r=0 lpr=69 crt=39'483 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005606 3 0.000129
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.006729 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912906647s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 active pruub 176.032379150s@ mbc={}] PeeringState::start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] exit Reset 0.000176 1 0.000233
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118 pruub=9.912783623s) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY pruub 176.032379150s@ mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 385024 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 118 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.919067 5 0.000393
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000119 1 0.000115
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000314 1 0.000039
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:59.468384+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.042534 2 0.000050
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 118 handle_osd_map epochs [119,119], i have 118, src has [1,119]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.017657 3 0.000064
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.017714 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=-1 lpr=118 pi=[69,118)/1 crt=39'483 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.055496 1 0.000142
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active 1.017870 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary 2.024619 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started 2.024656 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] async=[0] r=0 lpr=117 pi=[67,117)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901124954s) [0] async=[0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 active pruub 183.038589478s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Reset 0.000164 1 0.000205
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] exit Reset 0.000158 1 0.000220
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119 pruub=15.901021957s) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 183.038589478s@ mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.006916 2 0.000066
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000036 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 119 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.1d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 303104 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:00.468529+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:29.562970+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.1d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:29.573600+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.1d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.008684 3 0.000095
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.015748 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=69/70 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 83)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:29.562970+0000 osd.2 (osd.2) 82 : cluster [DBG] 3.1d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:29.573600+0000 osd.2 (osd.2) 83 : cluster [DBG] 3.1d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/Stray 1.019690 7 0.000117
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000103 1 0.000051
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] lb MIN local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 DELETING pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.047284 2 0.000196
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] lb MIN local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete 0.047459 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] lb MIN local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=-1 lpr=119 pi=[67,119)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started 1.067214 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.266440 5 0.000324
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000127 1 0.000106
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000417 1 0.000037
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035804 2 0.000063
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 335872 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:01.468701+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:30.536845+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:30.547415+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.895073891s of 10.020702362s, submitted: 59
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 120 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.738970 1 0.000151
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active 1.042006 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary 2.057773 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started 2.057801 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] async=[1] r=0 lpr=119 pi=[69,119)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224273682s) [1] async=[1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 active pruub 184.419616699s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] exit Reset 0.000125 1 0.000169
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] enter Started
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] enter Start
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121 pruub=15.224187851s) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY pruub 184.419616699s@ mbc={}] enter Started/Stray
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 85)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:30.536845+0000 osd.2 (osd.2) 84 : cluster [DBG] 7.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:30.547415+0000 osd.2 (osd.2) 85 : cluster [DBG] 7.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 121 heartbeat osd_stat(store_statfs(0x4fceb8000/0x0/0x4ffc00000, data 0xb6eec/0x170000, compress 0x0/0x0/0x0, omap 0x105f6, meta 0x2bbfa0a), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 319488 heap: 70246400 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:02.468897+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:31.574303+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:31.584914+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 87)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:31.574303+0000 osd.2 (osd.2) 86 : cluster [DBG] 3.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:31.584914+0000 osd.2 (osd.2) 87 : cluster [DBG] 3.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.011468 7 0.000108
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000071 1 0.000048
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 DELETING pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.039298 2 0.000177
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039419 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=-1 lpr=121 pi=[69,121)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.050932 0 0.000000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb8950/0x173000, compress 0x0/0x0/0x0, omap 0x10881, meta 0x2bbf77f), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1433600 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 806240 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:03.469102+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69861376 unmapped: 1433600 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _renew_subs
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:04.469241+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1425408 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:05.469396+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69869568 unmapped: 1425408 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:06.469569+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69877760 unmapped: 1417216 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:07.469694+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:37.454949+0000 osd.2 (osd.2) 88 : cluster [DBG] 11.15 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:37.465471+0000 osd.2 (osd.2) 89 : cluster [DBG] 11.15 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 89)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:37.454949+0000 osd.2 (osd.2) 88 : cluster [DBG] 11.15 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:37.465471+0000 osd.2 (osd.2) 89 : cluster [DBG] 11.15 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1400832 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 807935 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:08.469939+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69894144 unmapped: 1400832 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:09.470137+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1392640 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:10.470275+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69902336 unmapped: 1392640 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:11.470412+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:41.404158+0000 osd.2 (osd.2) 90 : cluster [DBG] 11.3 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:41.414479+0000 osd.2 (osd.2) 91 : cluster [DBG] 11.3 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 91)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:41.404158+0000 osd.2 (osd.2) 90 : cluster [DBG] 11.3 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:41.414479+0000 osd.2 (osd.2) 91 : cluster [DBG] 11.3 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69910528 unmapped: 1384448 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.777606964s of 10.797446251s, submitted: 11
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:12.470650+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:42.399946+0000 osd.2 (osd.2) 92 : cluster [DBG] 11.12 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:42.411353+0000 osd.2 (osd.2) 93 : cluster [DBG] 11.12 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69926912 unmapped: 1368064 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812763 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:13.470832+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 93)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:42.399946+0000 osd.2 (osd.2) 92 : cluster [DBG] 11.12 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:42.411353+0000 osd.2 (osd.2) 93 : cluster [DBG] 11.12 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1359872 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:14.470958+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69935104 unmapped: 1359872 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:15.471137+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1351680 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:16.471326+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:46.379788+0000 osd.2 (osd.2) 94 : cluster [DBG] 11.d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:46.390374+0000 osd.2 (osd.2) 95 : cluster [DBG] 11.d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 95)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:46.379788+0000 osd.2 (osd.2) 94 : cluster [DBG] 11.d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:46.390374+0000 osd.2 (osd.2) 95 : cluster [DBG] 11.d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1351680 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:17.471536+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69943296 unmapped: 1351680 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 815176 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:18.471682+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1343488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:19.471875+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69951488 unmapped: 1343488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:20.472036+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1335296 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:21.472252+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1335296 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.021923065s of 10.029530525s, submitted: 4
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:22.472423+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:52.429451+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.1 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:52.439995+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.1 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 97)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:52.429451+0000 osd.2 (osd.2) 96 : cluster [DBG] 7.1 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:52.439995+0000 osd.2 (osd.2) 97 : cluster [DBG] 7.1 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69959680 unmapped: 1335296 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 817587 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:23.472620+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 1327104 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:24.472751+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 1327104 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:25.472933+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69967872 unmapped: 1327104 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:26.473099+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1318912 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:27.473246+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:57.454931+0000 osd.2 (osd.2) 98 : cluster [DBG] 11.b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:57.465575+0000 osd.2 (osd.2) 99 : cluster [DBG] 11.b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 99)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:57.454931+0000 osd.2 (osd.2) 98 : cluster [DBG] 11.b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:57.465575+0000 osd.2 (osd.2) 99 : cluster [DBG] 11.b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69976064 unmapped: 1318912 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 820000 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:28.473461+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69984256 unmapped: 1310720 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:29.473614+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:59.390329+0000 osd.2 (osd.2) 100 : cluster [DBG] 11.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:06:59.400884+0000 osd.2 (osd.2) 101 : cluster [DBG] 11.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 101)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:59.390329+0000 osd.2 (osd.2) 100 : cluster [DBG] 11.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:06:59.400884+0000 osd.2 (osd.2) 101 : cluster [DBG] 11.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 69992448 unmapped: 1302528 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:30.473855+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1294336 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:31.474024+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:01.400133+0000 osd.2 (osd.2) 102 : cluster [DBG] 4.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:01.410678+0000 osd.2 (osd.2) 103 : cluster [DBG] 4.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 103)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:01.400133+0000 osd.2 (osd.2) 102 : cluster [DBG] 4.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:01.410678+0000 osd.2 (osd.2) 103 : cluster [DBG] 4.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70000640 unmapped: 1294336 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:32.474226+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1286144 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 824824 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:33.474383+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70008832 unmapped: 1286144 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:34.474529+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70041600 unmapped: 1253376 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.897191048s of 12.921946526s, submitted: 8
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:35.474749+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:05.351401+0000 osd.2 (osd.2) 104 : cluster [DBG] 8.2 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:05.361881+0000 osd.2 (osd.2) 105 : cluster [DBG] 8.2 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 105)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:05.351401+0000 osd.2 (osd.2) 104 : cluster [DBG] 8.2 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:05.361881+0000 osd.2 (osd.2) 105 : cluster [DBG] 8.2 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1236992 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:36.475431+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70057984 unmapped: 1236992 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:37.475593+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70066176 unmapped: 1228800 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 827235 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.1 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:38.475816+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:08.283158+0000 osd.2 (osd.2) 106 : cluster [DBG] 4.1 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:08.293550+0000 osd.2 (osd.2) 107 : cluster [DBG] 4.1 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 107)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:08.283158+0000 osd.2 (osd.2) 106 : cluster [DBG] 4.1 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:08.293550+0000 osd.2 (osd.2) 107 : cluster [DBG] 4.1 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1204224 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:39.476150+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:09.242815+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.2 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:09.253376+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.2 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 109)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:09.242815+0000 osd.2 (osd.2) 108 : cluster [DBG] 7.2 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:09.253376+0000 osd.2 (osd.2) 109 : cluster [DBG] 7.2 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.5 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.5 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70090752 unmapped: 1204224 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:40.476339+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:10.229797+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.5 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:10.240344+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.5 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 111)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:10.229797+0000 osd.2 (osd.2) 110 : cluster [DBG] 3.5 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:10.240344+0000 osd.2 (osd.2) 111 : cluster [DBG] 3.5 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1196032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:41.476582+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70098944 unmapped: 1196032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:42.476827+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1187840 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 836881 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:43.476982+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:13.187714+0000 osd.2 (osd.2) 112 : cluster [DBG] 8.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:13.198289+0000 osd.2 (osd.2) 113 : cluster [DBG] 8.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 113)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:13.187714+0000 osd.2 (osd.2) 112 : cluster [DBG] 8.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:13.198289+0000 osd.2 (osd.2) 113 : cluster [DBG] 8.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70107136 unmapped: 1187840 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:44.477212+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:14.154233+0000 osd.2 (osd.2) 114 : cluster [DBG] 11.9 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:14.168405+0000 osd.2 (osd.2) 115 : cluster [DBG] 11.9 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 115)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:14.154233+0000 osd.2 (osd.2) 114 : cluster [DBG] 11.9 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:14.168405+0000 osd.2 (osd.2) 115 : cluster [DBG] 11.9 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1171456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:45.477457+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1171456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:46.477624+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70123520 unmapped: 1171456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:47.477772+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.605541229s of 12.806389809s, submitted: 12
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1163264 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841705 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:48.477924+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:18.157878+0000 osd.2 (osd.2) 116 : cluster [DBG] 8.d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:18.168346+0000 osd.2 (osd.2) 117 : cluster [DBG] 8.d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 117)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:18.157878+0000 osd.2 (osd.2) 116 : cluster [DBG] 8.d scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:18.168346+0000 osd.2 (osd.2) 117 : cluster [DBG] 8.d scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70131712 unmapped: 1163264 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:49.478139+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1155072 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:50.478290+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70139904 unmapped: 1155072 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:51.478439+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1146880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:52.478556+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1146880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 841705 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:53.478742+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1146880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:54.478888+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70148096 unmapped: 1146880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:55.479014+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:25.002854+0000 osd.2 (osd.2) 118 : cluster [DBG] 7.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:25.013413+0000 osd.2 (osd.2) 119 : cluster [DBG] 7.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 119)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:25.002854+0000 osd.2 (osd.2) 118 : cluster [DBG] 7.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:25.013413+0000 osd.2 (osd.2) 119 : cluster [DBG] 7.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1138688 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:56.479195+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70156288 unmapped: 1138688 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:57.479394+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1130496 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844116 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:58.479550+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70164480 unmapped: 1130496 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:59.479725+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1122304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:00.479907+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1122304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:01.480044+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.808049202s of 13.814610481s, submitted: 4
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70172672 unmapped: 1122304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:02.480198+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:31.972485+0000 osd.2 (osd.2) 120 : cluster [DBG] 7.5 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:31.983067+0000 osd.2 (osd.2) 121 : cluster [DBG] 7.5 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 121)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:31.972485+0000 osd.2 (osd.2) 120 : cluster [DBG] 7.5 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:31.983067+0000 osd.2 (osd.2) 121 : cluster [DBG] 7.5 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70189056 unmapped: 1105920 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 848940 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:03.480414+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:32.931014+0000 osd.2 (osd.2) 122 : cluster [DBG] 11.2 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:32.941541+0000 osd.2 (osd.2) 123 : cluster [DBG] 11.2 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 123)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:32.931014+0000 osd.2 (osd.2) 122 : cluster [DBG] 11.2 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:32.941541+0000 osd.2 (osd.2) 123 : cluster [DBG] 11.2 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1097728 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:04.480603+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70197248 unmapped: 1097728 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:05.480817+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1089536 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:06.481027+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:35.931436+0000 osd.2 (osd.2) 124 : cluster [DBG] 4.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:35.941999+0000 osd.2 (osd.2) 125 : cluster [DBG] 4.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 125)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:35.931436+0000 osd.2 (osd.2) 124 : cluster [DBG] 4.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:35.941999+0000 osd.2 (osd.2) 125 : cluster [DBG] 4.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70205440 unmapped: 1089536 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:07.481260+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:36.946048+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:36.956615+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 127)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:36.946048+0000 osd.2 (osd.2) 126 : cluster [DBG] 7.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:36.956615+0000 osd.2 (osd.2) 127 : cluster [DBG] 7.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1081344 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 853762 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:08.481511+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1081344 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:09.481688+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70213632 unmapped: 1081344 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:10.481853+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70230016 unmapped: 1064960 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:11.482030+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:40.882166+0000 osd.2 (osd.2) 128 : cluster [DBG] 8.4 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:40.892485+0000 osd.2 (osd.2) 129 : cluster [DBG] 8.4 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 129)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:40.882166+0000 osd.2 (osd.2) 128 : cluster [DBG] 8.4 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:40.892485+0000 osd.2 (osd.2) 129 : cluster [DBG] 8.4 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1056768 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:12.482265+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:41.837617+0000 osd.2 (osd.2) 130 : cluster [DBG] 7.15 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:41.848146+0000 osd.2 (osd.2) 131 : cluster [DBG] 7.15 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 131)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:41.837617+0000 osd.2 (osd.2) 130 : cluster [DBG] 7.15 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:41.848146+0000 osd.2 (osd.2) 131 : cluster [DBG] 7.15 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1056768 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 858586 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:13.482480+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70238208 unmapped: 1056768 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:14.482653+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 1032192 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:15.482791+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.827822685s of 13.856680870s, submitted: 12
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70262784 unmapped: 1032192 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:16.482988+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:45.829197+0000 osd.2 (osd.2) 132 : cluster [DBG] 3.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:45.839824+0000 osd.2 (osd.2) 133 : cluster [DBG] 3.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 133)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:45.829197+0000 osd.2 (osd.2) 132 : cluster [DBG] 3.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:45.839824+0000 osd.2 (osd.2) 133 : cluster [DBG] 3.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 1024000 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:17.483216+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70270976 unmapped: 1024000 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860997 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:18.483348+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 1007616 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:19.483526+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 1007616 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:20.484155+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70287360 unmapped: 1007616 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:21.484311+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 991232 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:22.484498+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:51.876035+0000 osd.2 (osd.2) 134 : cluster [DBG] 7.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:51.886638+0000 osd.2 (osd.2) 135 : cluster [DBG] 7.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 135)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:51.876035+0000 osd.2 (osd.2) 134 : cluster [DBG] 7.a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:51.886638+0000 osd.2 (osd.2) 135 : cluster [DBG] 7.a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70303744 unmapped: 991232 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 863408 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:23.485084+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70311936 unmapped: 983040 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:24.485237+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:53.910221+0000 osd.2 (osd.2) 136 : cluster [DBG] 3.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:53.920784+0000 osd.2 (osd.2) 137 : cluster [DBG] 3.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 137)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:53.910221+0000 osd.2 (osd.2) 136 : cluster [DBG] 3.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:53.920784+0000 osd.2 (osd.2) 137 : cluster [DBG] 3.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 974848 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:25.485487+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70320128 unmapped: 974848 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:26.485655+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70328320 unmapped: 966656 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:27.485763+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 950272 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 865821 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:28.485903+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 950272 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:29.486039+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.025310516s of 14.034622192s, submitted: 6
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70344704 unmapped: 950272 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:30.486230+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:59.863896+0000 osd.2 (osd.2) 138 : cluster [DBG] 8.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:07:59.874671+0000 osd.2 (osd.2) 139 : cluster [DBG] 8.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 139)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:59.863896+0000 osd.2 (osd.2) 138 : cluster [DBG] 8.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:07:59.874671+0000 osd.2 (osd.2) 139 : cluster [DBG] 8.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 942080 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:31.486448+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70352896 unmapped: 942080 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:32.486607+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:01.850012+0000 osd.2 (osd.2) 140 : cluster [DBG] 7.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:01.860570+0000 osd.2 (osd.2) 141 : cluster [DBG] 7.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 141)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:01.850012+0000 osd.2 (osd.2) 140 : cluster [DBG] 7.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:01.860570+0000 osd.2 (osd.2) 141 : cluster [DBG] 7.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70369280 unmapped: 925696 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 873062 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:33.486873+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:02.823249+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:02.833722+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 143)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:02.823249+0000 osd.2 (osd.2) 142 : cluster [DBG] 11.1a scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:02.833722+0000 osd.2 (osd.2) 143 : cluster [DBG] 11.1a scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 917504 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:34.487082+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70377472 unmapped: 917504 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:35.488714+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 909312 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:36.490556+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 909312 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:37.493094+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70385664 unmapped: 909312 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 873062 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:38.494625+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 901120 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:39.495271+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:40.495882+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70393856 unmapped: 901120 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.012310982s of 11.024168015s, submitted: 6
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:41.496034+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:10.888051+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:10.898593+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 876544 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 145)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:10.888051+0000 osd.2 (osd.2) 144 : cluster [DBG] 11.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:10.898593+0000 osd.2 (osd.2) 145 : cluster [DBG] 11.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:42.497049+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70418432 unmapped: 876544 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:43.497885+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 868352 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 875477 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:44.498035+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:13.834193+0000 osd.2 (osd.2) 146 : cluster [DBG] 4.13 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:13.844730+0000 osd.2 (osd.2) 147 : cluster [DBG] 4.13 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70426624 unmapped: 868352 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 147)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:13.834193+0000 osd.2 (osd.2) 146 : cluster [DBG] 4.13 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:13.844730+0000 osd.2 (osd.2) 147 : cluster [DBG] 4.13 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:45.498684+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 860160 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:46.498861+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 860160 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:47.499441+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70434816 unmapped: 860160 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:48.499966+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 851968 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877890 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:49.500179+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 851968 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:50.500383+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70443008 unmapped: 851968 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:51.500597+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:20.812931+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:20.823496+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 835584 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.895845413s of 10.908414841s, submitted: 6
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 149)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:20.812931+0000 osd.2 (osd.2) 148 : cluster [DBG] 11.1f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:20.823496+0000 osd.2 (osd.2) 149 : cluster [DBG] 11.1f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:52.500821+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:21.796548+0000 osd.2 (osd.2) 150 : cluster [DBG] 3.16 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:21.807183+0000 osd.2 (osd.2) 151 : cluster [DBG] 3.16 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 835584 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 151)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:21.796548+0000 osd.2 (osd.2) 150 : cluster [DBG] 3.16 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:21.807183+0000 osd.2 (osd.2) 151 : cluster [DBG] 3.16 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:53.501141+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70459392 unmapped: 835584 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 882718 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:54.501315+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:23.795920+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.1e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:23.806472+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.1e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 827392 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 153)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:23.795920+0000 osd.2 (osd.2) 152 : cluster [DBG] 11.1e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:23.806472+0000 osd.2 (osd.2) 153 : cluster [DBG] 11.1e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:55.501576+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:24.836553+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:24.847222+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 827392 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 155)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:24.836553+0000 osd.2 (osd.2) 154 : cluster [DBG] 11.1b scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:24.847222+0000 osd.2 (osd.2) 155 : cluster [DBG] 11.1b scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:56.502084+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70467584 unmapped: 827392 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:57.502253+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:26.802660+0000 osd.2 (osd.2) 156 : cluster [DBG] 8.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:26.813194+0000 osd.2 (osd.2) 157 : cluster [DBG] 8.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 157)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:26.802660+0000 osd.2 (osd.2) 156 : cluster [DBG] 8.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:26.813194+0000 osd.2 (osd.2) 157 : cluster [DBG] 8.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70475776 unmapped: 819200 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:58.502489+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:27.812397+0000 osd.2 (osd.2) 158 : cluster [DBG] 4.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:27.822923+0000 osd.2 (osd.2) 159 : cluster [DBG] 4.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 159)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:27.812397+0000 osd.2 (osd.2) 158 : cluster [DBG] 4.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:27.822923+0000 osd.2 (osd.2) 159 : cluster [DBG] 4.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70492160 unmapped: 802816 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 892374 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:59.502681+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:28.790005+0000 osd.2 (osd.2) 160 : cluster [DBG] 7.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:28.800551+0000 osd.2 (osd.2) 161 : cluster [DBG] 7.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 161)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:28.790005+0000 osd.2 (osd.2) 160 : cluster [DBG] 7.1c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:28.800551+0000 osd.2 (osd.2) 161 : cluster [DBG] 7.1c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 786432 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:00.502898+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:29.767284+0000 osd.2 (osd.2) 162 : cluster [DBG] 11.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:29.777794+0000 osd.2 (osd.2) 163 : cluster [DBG] 11.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 163)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:29.767284+0000 osd.2 (osd.2) 162 : cluster [DBG] 11.11 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:29.777794+0000 osd.2 (osd.2) 163 : cluster [DBG] 11.11 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70508544 unmapped: 786432 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:01.503112+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 778240 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:02.503305+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70516736 unmapped: 778240 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.920908928s of 10.948004723s, submitted: 14
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:03.503449+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:32.744490+0000 osd.2 (osd.2) 164 : cluster [DBG] 11.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:32.754973+0000 osd.2 (osd.2) 165 : cluster [DBG] 11.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70524928 unmapped: 770048 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 899617 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 165)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:32.744490+0000 osd.2 (osd.2) 164 : cluster [DBG] 11.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:32.754973+0000 osd.2 (osd.2) 165 : cluster [DBG] 11.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:04.503638+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:33.769593+0000 osd.2 (osd.2) 166 : cluster [DBG] 8.12 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:33.783594+0000 osd.2 (osd.2) 167 : cluster [DBG] 8.12 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70533120 unmapped: 761856 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 167)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:33.769593+0000 osd.2 (osd.2) 166 : cluster [DBG] 8.12 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:33.783594+0000 osd.2 (osd.2) 167 : cluster [DBG] 8.12 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:05.503896+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:34.798986+0000 osd.2 (osd.2) 168 : cluster [DBG] 6.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:34.809535+0000 osd.2 (osd.2) 169 : cluster [DBG] 6.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70557696 unmapped: 737280 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 169)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:34.798986+0000 osd.2 (osd.2) 168 : cluster [DBG] 6.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:34.809535+0000 osd.2 (osd.2) 169 : cluster [DBG] 6.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:06.504152+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 729088 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 6.f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:07.504338+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:36.752533+0000 osd.2 (osd.2) 170 : cluster [DBG] 6.f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:36.773676+0000 osd.2 (osd.2) 171 : cluster [DBG] 6.f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70565888 unmapped: 729088 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 171)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:36.752533+0000 osd.2 (osd.2) 170 : cluster [DBG] 6.f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:36.773676+0000 osd.2 (osd.2) 171 : cluster [DBG] 6.f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:08.504550+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:37.794053+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:37.832928+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70590464 unmapped: 704512 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909263 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 173)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:37.794053+0000 osd.2 (osd.2) 172 : cluster [DBG] 9.e scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:37.832928+0000 osd.2 (osd.2) 173 : cluster [DBG] 9.e scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:09.504986+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 696320 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:10.505532+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70598656 unmapped: 696320 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:11.505665+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 688128 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:12.506232+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 688128 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:13.506761+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909263 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70606848 unmapped: 688128 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:14.507205+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70615040 unmapped: 679936 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.063706398s of 12.084164619s, submitted: 10
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:15.507648+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:44.828701+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:44.864183+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 663552 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 175)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:44.828701+0000 osd.2 (osd.2) 174 : cluster [DBG] 9.8 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:44.864183+0000 osd.2 (osd.2) 175 : cluster [DBG] 9.8 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:16.508056+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 655360 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:17.508403+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70639616 unmapped: 655360 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:18.508677+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911674 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 630784 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:19.508934+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 630784 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:20.509173+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70664192 unmapped: 630784 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:21.509454+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 614400 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:22.509602+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70680576 unmapped: 614400 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:23.509766+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 911674 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 606208 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:24.509911+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70688768 unmapped: 606208 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:25.509986+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70705152 unmapped: 589824 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.978821754s of 10.984546661s, submitted: 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:26.510156+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:55.813293+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.17 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:08:55.837953+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.17 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 573440 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 177)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:55.813293+0000 osd.2 (osd.2) 176 : cluster [DBG] 9.17 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:08:55.837953+0000 osd.2 (osd.2) 177 : cluster [DBG] 9.17 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:27.510402+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70721536 unmapped: 573440 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:28.510527+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 914087 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 565248 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:29.510658+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70729728 unmapped: 565248 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:30.510831+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 557056 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:31.511037+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:00.754023+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:00.792862+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70737920 unmapped: 557056 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 179)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:00.754023+0000 osd.2 (osd.2) 178 : cluster [DBG] 9.f scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:00.792862+0000 osd.2 (osd.2) 179 : cluster [DBG] 9.f scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:32.511250+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:01.720993+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:01.749305+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 548864 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 181)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:01.720993+0000 osd.2 (osd.2) 180 : cluster [DBG] 9.c scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:01.749305+0000 osd.2 (osd.2) 181 : cluster [DBG] 9.c scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:33.511538+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 918909 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 548864 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:34.511696+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 548864 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:35.511821+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:04.722000+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.7 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:04.757412+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.7 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70746112 unmapped: 548864 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 183)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:04.722000+0000 osd.2 (osd.2) 182 : cluster [DBG] 9.7 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:04.757412+0000 osd.2 (osd.2) 183 : cluster [DBG] 9.7 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:36.512038+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.6 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.925371170s of 10.942327499s, submitted: 8
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70754304 unmapped: 540672 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.6 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:37.512163+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:06.755607+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.6 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:06.787309+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.6 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 185)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:06.755607+0000 osd.2 (osd.2) 184 : cluster [DBG] 9.6 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:06.787309+0000 osd.2 (osd.2) 185 : cluster [DBG] 9.6 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 516096 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:38.512390+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 923731 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70778880 unmapped: 516096 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:39.512525+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:08.792498+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.19 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:08.834966+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.19 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 187)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:08.792498+0000 osd.2 (osd.2) 186 : cluster [DBG] 9.19 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:08.834966+0000 osd.2 (osd.2) 187 : cluster [DBG] 9.19 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 491520 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:40.512734+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70803456 unmapped: 491520 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:41.512868+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 483328 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:42.513002+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 483328 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:43.513235+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 928557 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70811648 unmapped: 483328 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:44.513458+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:13.721719+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:13.753563+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 189)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:13.721719+0000 osd.2 (osd.2) 188 : cluster [DBG] 9.18 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:13.753563+0000 osd.2 (osd.2) 189 : cluster [DBG] 9.18 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 475136 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:45.513669+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:14.736637+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.13 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  will send 2026-01-20T19:09:14.768534+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.13 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client handle_log_ack log(last 191)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:14.736637+0000 osd.2 (osd.2) 190 : cluster [DBG] 9.13 scrub starts
Jan 20 19:27:18 compute-0 ceph-osd[88112]: log_client  logged 2026-01-20T19:09:14.768534+0000 osd.2 (osd.2) 191 : cluster [DBG] 9.13 scrub ok
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70819840 unmapped: 475136 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:46.513877+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 466944 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:47.514022+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 466944 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:48.514167+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70828032 unmapped: 466944 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:49.514321+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 458752 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:50.514430+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70836224 unmapped: 458752 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:51.514585+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 450560 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:52.514716+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 450560 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:53.514863+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70844416 unmapped: 450560 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:54.514995+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70852608 unmapped: 442368 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:55.515136+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 434176 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:56.515315+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70860800 unmapped: 434176 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:57.515465+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 425984 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:58.515617+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 425984 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:59.515761+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70868992 unmapped: 425984 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:00.516044+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 417792 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:01.516222+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70877184 unmapped: 417792 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:02.516386+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 409600 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:03.516618+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 409600 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:04.516773+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70893568 unmapped: 401408 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:05.516940+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 393216 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:06.517175+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70901760 unmapped: 393216 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:07.517316+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 385024 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:08.517471+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 385024 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:09.517623+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70909952 unmapped: 385024 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:10.517776+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 376832 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:11.518036+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 376832 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:12.518179+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70918144 unmapped: 376832 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:13.518338+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 368640 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:14.518488+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70926336 unmapped: 368640 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:15.518618+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 360448 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:16.518765+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 360448 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:17.518884+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70934528 unmapped: 360448 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:18.519106+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 352256 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:19.519262+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 352256 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:20.519494+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70959104 unmapped: 335872 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:21.519695+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 327680 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:22.519873+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70967296 unmapped: 327680 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:23.520134+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 319488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:24.520336+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 319488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:25.520533+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 319488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:26.520749+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70975488 unmapped: 319488 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:27.520973+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 311296 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:28.521150+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70983680 unmapped: 311296 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:29.521313+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 303104 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:30.521480+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 303104 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:31.521624+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 294912 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:32.521908+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 294912 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:33.522084+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71000064 unmapped: 294912 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:34.522252+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 286720 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:35.522445+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71008256 unmapped: 286720 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:36.522707+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 278528 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:37.522848+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 278528 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:38.522978+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 270336 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:39.523196+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 270336 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:40.523348+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71032832 unmapped: 262144 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:41.523516+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 253952 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:42.523687+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 253952 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:43.523867+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 253952 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:44.524104+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 245760 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:45.524306+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71049216 unmapped: 245760 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:46.524533+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 237568 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:47.524679+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 237568 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:48.524842+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 237568 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:49.525005+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 221184 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:50.525133+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 221184 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:51.525322+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 221184 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:52.525706+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 212992 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:53.525863+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71081984 unmapped: 212992 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:54.526094+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 204800 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:55.526231+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 204800 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:56.526422+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71090176 unmapped: 204800 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:57.526528+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 196608 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:58.526662+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71098368 unmapped: 196608 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:59.526791+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 180224 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:00.526922+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 180224 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:01.527084+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 180224 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:02.527278+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 172032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:03.527412+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 172032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:04.527579+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 172032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:05.527734+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 172032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:06.527922+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 172032 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:07.528096+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 163840 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:08.528227+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 163840 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:09.528457+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 155648 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:10.528588+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 155648 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:11.528715+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 155648 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:12.528861+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 147456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:13.528999+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 147456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:14.529136+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 147456 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:15.529275+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71155712 unmapped: 139264 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:16.529485+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 131072 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:17.530309+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 131072 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:18.530766+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 131072 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:19.531281+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 122880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:20.531589+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 122880 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:21.531726+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 114688 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:22.532133+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 114688 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:23.532267+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71180288 unmapped: 114688 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:24.532522+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 106496 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:25.532782+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 106496 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:26.533249+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71188480 unmapped: 106496 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:27.533434+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 98304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:28.533680+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 98304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:29.533809+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71196672 unmapped: 98304 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:30.534254+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 90112 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:31.534567+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71204864 unmapped: 90112 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:32.534783+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 81920 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:33.534933+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71213056 unmapped: 81920 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:34.535083+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 65536 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:35.535206+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 65536 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:36.535393+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71229440 unmapped: 65536 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:37.535513+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 57344 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:38.535677+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71237632 unmapped: 57344 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:39.535926+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 49152 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:40.536106+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 49152 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:41.536333+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 49152 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:42.536507+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 40960 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:43.536651+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 40960 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:44.536794+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 40960 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:45.536953+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 32768 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:46.537120+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 32768 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:47.537245+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 24576 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:48.537383+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 24576 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:49.537521+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 24576 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:50.537656+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 16384 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:51.537859+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 16384 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:52.537993+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 8192 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:53.538181+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 8192 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:54.538327+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 0 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:55.538444+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71294976 unmapped: 0 heap: 71294976 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:56.538610+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1040384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:57.538794+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 1040384 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:58.538910+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:59.539043+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:00.539247+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 1032192 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:01.539354+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1024000 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:02.539622+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 1024000 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:03.539802+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:04.539936+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:05.540092+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71327744 unmapped: 1015808 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:06.540306+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1007616 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:07.540445+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71335936 unmapped: 1007616 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:08.540590+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:09.540781+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71344128 unmapped: 999424 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:10.540908+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:11.541049+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:12.541213+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71352320 unmapped: 991232 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:13.541370+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:14.541579+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:15.541730+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71360512 unmapped: 983040 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:16.541945+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 974848 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:17.542159+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 974848 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:18.542400+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71368704 unmapped: 974848 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:19.542604+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 966656 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:20.542726+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71376896 unmapped: 966656 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:21.542819+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 958464 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:22.542966+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 958464 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:23.543125+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71385088 unmapped: 958464 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:24.543303+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 950272 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:25.543489+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71393280 unmapped: 950272 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:26.543691+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:27.543820+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:28.543988+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71401472 unmapped: 942080 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:29.544285+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:30.544847+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:31.545210+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 925696 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:32.545388+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 917504 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:33.545741+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 917504 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:34.546353+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 909312 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:35.546682+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71434240 unmapped: 909312 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:36.547004+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:37.547727+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:38.548039+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71442432 unmapped: 901120 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:39.548302+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 892928 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:40.548467+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71450624 unmapped: 892928 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:41.548582+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 884736 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:42.548733+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 884736 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:43.548875+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71458816 unmapped: 884736 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:44.549163+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 876544 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:45.549306+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 876544 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:46.549425+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71467008 unmapped: 876544 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:47.549604+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:48.549731+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:49.549875+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71475200 unmapped: 868352 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:50.550045+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 860160 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:51.550192+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71483392 unmapped: 860160 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:52.550335+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 851968 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:53.550486+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71491584 unmapped: 851968 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:54.550627+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:55.551493+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 843776 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:56.551673+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 843776 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:57.552165+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 843776 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:58.552327+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 835584 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:59.552690+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71507968 unmapped: 835584 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:00.553177+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 827392 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Jan 20 19:27:18 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002882335' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:01.553447+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 827392 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:02.553603+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 827392 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:03.553933+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71524352 unmapped: 819200 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:04.554066+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71532544 unmapped: 811008 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:05.554219+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 802816 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:06.554428+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 802816 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:07.554723+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71540736 unmapped: 802816 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:08.555120+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 794624 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:09.555273+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71548928 unmapped: 794624 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:10.555555+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 786432 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:11.555701+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 786432 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:12.555835+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71557120 unmapped: 786432 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:13.556001+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 778240 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:14.556144+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71565312 unmapped: 778240 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:15.556475+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71589888 unmapped: 753664 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:16.556658+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 745472 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:17.556865+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71598080 unmapped: 745472 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:18.557067+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:19.557249+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:20.557417+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:21.557592+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:22.557720+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71606272 unmapped: 737280 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:23.557844+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 729088 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:24.557948+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71614464 unmapped: 729088 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:25.558063+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 720896 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:26.558227+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 720896 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:27.558425+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71622656 unmapped: 720896 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:28.559424+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 712704 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:29.559753+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 712704 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:30.560424+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71630848 unmapped: 712704 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:31.561391+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:32.561812+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71639040 unmapped: 704512 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:33.562133+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:34.562406+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:35.562540+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71647232 unmapped: 696320 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:36.563262+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 688128 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:37.563627+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 688128 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:38.564079+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71655424 unmapped: 688128 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:39.564307+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 679936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:40.564475+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71663616 unmapped: 679936 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:41.564773+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:42.565098+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71671808 unmapped: 671744 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:43.565237+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:44.565417+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:45.565580+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71680000 unmapped: 663552 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:46.565914+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 655360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:47.566050+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71688192 unmapped: 655360 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:48.566188+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 647168 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:49.566328+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 647168 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:50.566587+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71696384 unmapped: 647168 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:51.566761+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 638976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:52.566941+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71704576 unmapped: 638976 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:53.567142+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:54.567425+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71712768 unmapped: 630784 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:55.567616+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 622592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:56.567769+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71720960 unmapped: 622592 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:57.567904+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:58.568021+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:59.568144+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71729152 unmapped: 614400 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:00.568269+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:01.568394+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:02.568555+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71737344 unmapped: 606208 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:03.568675+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71745536 unmapped: 598016 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:04.568812+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:05.568985+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71761920 unmapped: 581632 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:06.569172+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:07.569274+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:08.569379+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71770112 unmapped: 573440 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:09.569559+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:10.569705+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:11.569814+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71778304 unmapped: 565248 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 5409 writes, 23K keys, 5409 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5409 writes, 759 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5409 writes, 23K keys, 5409 commit groups, 1.0 writes per commit group, ingest: 18.48 MB, 0.03 MB/s
                                           Interval WAL: 5409 writes, 759 syncs, 7.13 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 9e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.3e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:12.569935+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 491520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:13.570098+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71852032 unmapped: 491520 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:14.570232+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 483328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:15.570431+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71860224 unmapped: 483328 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:16.570601+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:17.570784+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:18.570950+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71868416 unmapped: 475136 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:19.571167+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 466944 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:20.571297+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71876608 unmapped: 466944 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:21.571470+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71884800 unmapped: 458752 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:22.571723+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 442368 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:23.571880+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71901184 unmapped: 442368 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:24.572048+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:25.572188+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71909376 unmapped: 434176 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:26.572354+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:27.572537+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:28.572657+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71917568 unmapped: 425984 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:29.572791+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 417792 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:30.572924+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71925760 unmapped: 417792 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:31.573084+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:32.573223+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:33.573384+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71933952 unmapped: 409600 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:34.573502+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 401408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:35.573631+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 401408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:36.573802+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71942144 unmapped: 401408 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:37.573959+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:38.574095+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:39.574208+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71950336 unmapped: 393216 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:40.574419+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:41.574581+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71958528 unmapped: 385024 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:42.574755+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 376832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:43.574973+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71966720 unmapped: 376832 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:44.575155+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:45.575314+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:46.575527+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71974912 unmapped: 368640 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:47.575663+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:48.575809+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71983104 unmapped: 360448 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:49.575920+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:50.576040+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:51.576162+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71991296 unmapped: 352256 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:52.576314+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 71999488 unmapped: 344064 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 316.272521973s of 316.286499023s, submitted: 8
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:53.576448+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 72302592 unmapped: 40960 heap: 72343552 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:54.576563+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:55.576703+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:56.576854+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:57.576959+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:58.577092+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:59.577205+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73588736 unmapped: 851968 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:00.577329+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:01.577456+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:02.577584+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 843776 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:03.577809+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:04.577954+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73605120 unmapped: 835584 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:05.578118+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 819200 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:06.578334+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 819200 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:07.578469+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 811008 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:08.578594+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 811008 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:09.578743+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73637888 unmapped: 802816 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:10.578880+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 794624 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:11.578995+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 794624 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:12.579114+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 794624 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:13.579240+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 786432 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:14.579486+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73654272 unmapped: 786432 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:15.579666+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 778240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:16.579855+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 778240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:17.580016+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 770048 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:18.580157+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 770048 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:19.580282+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 770048 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:20.580450+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:21.580659+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:22.580849+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 778240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:23.580990+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 778240 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:24.581130+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:25.581261+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:26.581484+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 761856 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:27.581635+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 753664 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:28.581757+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 753664 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:29.581903+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 737280 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:30.582042+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 737280 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:31.582179+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 737280 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:32.582320+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 729088 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:33.582449+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 729088 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:34.582567+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 712704 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:35.582697+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 712704 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:36.582876+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 704512 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:37.583112+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 704512 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:38.583269+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 704512 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:39.583421+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 696320 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:40.583616+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 696320 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:41.583807+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 688128 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:42.583951+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 688128 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:43.584146+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 688128 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:44.584308+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 679936 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:45.584433+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73760768 unmapped: 679936 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:46.584653+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 671744 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:47.584775+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 671744 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:48.584959+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73768960 unmapped: 671744 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:49.585091+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 663552 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:50.585229+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 663552 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:51.585399+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 663552 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:52.585519+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 655360 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:53.585656+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 655360 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:54.585803+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 647168 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:55.585957+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 647168 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:56.586164+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 647168 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:57.586298+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 638976 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:58.586446+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 638976 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:59.586551+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 622592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:00.586675+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 622592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:01.586799+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73818112 unmapped: 622592 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:02.586929+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 614400 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:03.587050+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 614400 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:04.587172+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 598016 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:05.587301+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 598016 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:06.587415+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 589824 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:07.587544+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 589824 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:08.587670+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 589824 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:09.587798+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 581632 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:10.588033+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73859072 unmapped: 581632 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:11.588152+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 573440 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:12.588282+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 573440 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:13.588423+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:14.588584+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:15.588703+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:16.588851+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:17.588971+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:18.589085+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:19.589217+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:20.589338+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:21.589421+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:22.589542+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:23.589674+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 565248 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:24.589806+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:25.589930+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:26.590145+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:27.590299+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:28.590445+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:29.590574+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:30.590710+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:31.590828+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:32.590946+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:33.591071+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:34.591206+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:35.591339+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:36.591540+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:37.591695+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:38.591831+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 557056 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:39.592685+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:40.592804+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:41.592939+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:42.593075+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:43.593216+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:44.593345+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:45.593553+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:46.593738+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:47.593939+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:48.594079+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:49.594280+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:50.594420+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:51.594674+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:52.594827+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:53.594945+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:54.595095+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:55.595233+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:56.595468+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:57.595625+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:58.595843+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 540672 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:59.596017+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:00.596147+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:01.596317+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:02.596423+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:03.596634+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73908224 unmapped: 532480 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:04.596766+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:05.596900+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:06.597069+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:07.597193+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:08.597337+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:09.597470+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:10.597620+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:11.597759+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:12.597888+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:13.598039+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 524288 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:14.598169+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:15.598310+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:16.598470+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:17.598626+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:18.598887+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:19.599051+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:20.599209+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:21.599383+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:22.599528+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:23.599688+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:24.599821+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:25.599938+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:26.600133+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:27.600274+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:28.600439+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 516096 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:29.600585+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:30.600753+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:31.600875+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:32.601007+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:33.601193+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73932800 unmapped: 507904 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:34.601431+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:35.601566+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:36.601720+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:37.601871+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:38.602006+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:39.602148+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:40.602276+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:41.602417+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:42.602582+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:43.602839+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:44.603042+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 499712 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:45.603191+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:46.604036+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:47.604180+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:48.604344+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:49.604540+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:50.604687+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:51.604903+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:52.605094+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:53.605265+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:54.605440+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:55.605637+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:56.605887+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:57.606126+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:58.606416+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:59.606670+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73949184 unmapped: 491520 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:00.606874+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:01.607064+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:02.607227+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 475136 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:03.607430+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:04.607608+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:05.607771+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:06.607921+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:07.608090+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:08.608217+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:09.608330+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:10.608420+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:11.608558+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:12.608699+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:13.608834+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:14.608967+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:15.609074+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:16.609217+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:17.609412+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:18.609540+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:19.609739+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73973760 unmapped: 466944 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:20.609927+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:21.610108+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:22.610270+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:23.610452+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:24.610614+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:25.610784+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:26.610979+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:27.611106+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:28.611249+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:29.611434+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:30.611581+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:31.611749+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:32.611902+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:33.612065+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:34.612236+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:35.612397+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:36.612568+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:37.612732+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:38.613015+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:39.613138+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:40.613304+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:41.613445+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:42.613579+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:43.613742+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:44.613881+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:45.614008+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:46.614181+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:47.614318+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:48.614436+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:49.614563+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:50.616914+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:51.618853+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:52.619354+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:53.619503+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:54.619633+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:55.619786+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:56.620794+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:57.621269+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:58.622063+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:59.622492+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:00.623098+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:01.623237+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:02.623422+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:03.623549+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73981952 unmapped: 458752 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:04.623671+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:05.623797+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:06.624134+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:07.624307+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:08.624472+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:09.624610+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:10.624781+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:11.624908+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:12.625047+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:13.625198+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:14.625385+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 450560 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:15.625514+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:16.625701+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:17.625857+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:18.626005+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:19.626192+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 442368 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: mgrc ms_handle_reset ms_handle_reset con 0x5564ed496000
Jan 20 19:27:18 compute-0 ceph-osd[88112]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/894791725
Jan 20 19:27:18 compute-0 ceph-osd[88112]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/894791725,v1:192.168.122.100:6801/894791725]
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: get_auth_request con 0x5564ee510c00 auth_method 0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: mgrc handle_mgr_configure stats_period=5
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:20.626401+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:21.626533+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:22.626692+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:23.626845+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 335872 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:24.626978+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:25.627122+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:26.627286+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:27.627431+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:28.627561+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:29.627710+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:30.627849+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:31.628428+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:32.628653+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:33.628845+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 360448 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:34.628980+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:35.629121+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:36.629288+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:37.629439+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:38.629616+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74088448 unmapped: 352256 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:39.629779+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:40.629952+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:41.630091+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:42.630254+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:43.630459+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:44.630616+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:45.630752+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:46.630946+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:47.631096+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:48.631261+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:49.631437+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:50.631580+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930970 data_alloc: 218103808 data_used: 5487
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:51.631748+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:52.631897+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74096640 unmapped: 344064 heap: 74440704 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.854583740s of 300.153442383s, submitted: 90
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: handle_auth_request added challenge on 0x5564ef2f6800
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:53.632049+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:54.632216+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:55.632353+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:56.632508+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:57.632580+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:58.632698+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:59.632817+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:00.632936+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:01.633048+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:02.633179+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:03.633301+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:04.633423+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:05.633537+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:06.633712+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:07.633854+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:08.633980+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:09.634131+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:10.634321+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:11.634484+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:12.634625+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:13.634762+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:14.634907+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:15.635058+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:16.635212+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:17.635346+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:18.635515+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1269760 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:19.635629+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:20.635806+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:21.635920+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:22.636061+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:23.636211+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:24.636377+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:25.636497+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:26.636682+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:27.636832+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:28.636988+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:29.637131+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:30.637271+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:31.637419+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:32.637621+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:33.637817+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:34.637983+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:35.638159+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:36.638395+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:37.638520+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:38.638678+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:39.638835+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:40.639003+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:41.639193+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:42.639397+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:43.639558+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1236992 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:44.639703+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:45.639867+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:46.640061+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:47.640234+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:48.640446+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:49.640625+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:50.640853+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:51.641034+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:52.641186+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:53.641402+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:54.641664+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:55.641860+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:56.642138+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:57.642425+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:58.642601+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:59.642726+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:00.642883+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:01.643029+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:02.643206+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:03.643406+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:04.643530+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:05.643614+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:06.643801+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:07.643940+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:08.644091+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:09.644227+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:10.644405+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:11.644535+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:12.644703+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:13.644869+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:14.645018+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:15.645158+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:16.646131+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:17.646310+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:18.646442+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:19.646618+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:20.646766+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:21.646894+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:22.647048+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:23.647250+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1179648 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:24.647604+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:25.647842+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:26.648072+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:27.648205+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:28.648353+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:29.648512+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:30.648705+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:31.648889+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:32.649199+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:33.649339+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1253376 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:34.649521+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:35.649633+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:36.649766+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:37.649888+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:38.650040+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:39.650177+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:40.650437+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:41.650570+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:42.650744+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:43.650891+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:44.651038+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:45.651159+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:46.651315+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:47.651419+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:48.651550+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:49.651811+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:50.651935+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:51.652580+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:52.652752+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:53.652869+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:54.652976+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:55.653092+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:56.653223+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:57.653512+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:58.653645+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1261568 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:59.653835+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:00.654065+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:01.654270+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:02.654473+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:03.654659+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:04.654821+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:05.654990+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:06.655171+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:07.655350+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:08.655615+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1245184 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:09.655762+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:10.655917+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:11.656082+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:12.656271+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:13.656414+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:14.656606+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:15.656778+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:16.656979+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:17.657112+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:18.657262+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1228800 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:19.657431+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:20.657609+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:21.657755+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:22.657885+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:23.658020+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:24.658184+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:25.658317+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:26.658514+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:27.658633+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:28.658760+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:29.658903+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:30.659055+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:31.659196+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:32.659323+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:33.659433+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:34.659551+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:35.659808+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:36.660116+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:37.660279+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:38.660431+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:39.660564+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:40.660688+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:41.660821+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:42.660912+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:43.661070+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:44.661253+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:45.661438+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:46.661603+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:47.661755+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:48.661914+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:49.662113+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:50.662250+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:51.662429+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:52.662590+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:53.662719+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:54.662962+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:55.663119+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:56.663351+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:57.663596+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:58.663790+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:59.663958+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:00.664131+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:01.664307+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:02.664436+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:03.664577+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:04.664729+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:05.664848+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:06.665012+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:07.665629+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:08.665979+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:09.666167+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:10.667036+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:11.667318+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:12.667491+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:13.668336+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:14.669095+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1220608 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:15.669777+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:16.670479+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:17.671063+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:18.671298+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:19.671787+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:20.672155+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:21.672472+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:22.672634+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:23.673008+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:24.673525+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:25.673978+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:26.674172+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:27.674451+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:28.674761+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1212416 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:29.674997+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:30.675576+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:31.675917+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread fragmentation_score=0.000142 took=0.000036s
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:32.676156+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:33.676429+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:34.676636+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:35.676848+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:36.677097+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:37.677226+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:38.677430+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:39.677601+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:40.677760+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:41.677889+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:42.678036+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:43.678198+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:44.678415+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:45.678557+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:46.678730+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:47.678934+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:48.679085+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:49.679235+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:50.679385+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:51.679504+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:52.679622+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:53.679767+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:54.679928+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:55.680074+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:56.680276+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:57.680444+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:58.680595+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:59.680887+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:00.681061+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:01.681217+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:02.681458+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:03.681682+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:04.681858+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:05.682024+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:06.682460+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:07.682689+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:08.682900+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1204224 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:09.683197+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1196032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:10.683438+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1196032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:11.683623+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1196032 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 5637 writes, 24K keys, 5637 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5637 writes, 873 syncs, 6.46 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 228 writes, 342 keys, 228 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 228 writes, 114 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.012       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd134b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 5e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.01              0.00         1    0.009       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5564ebd13a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:12.683784+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:13.684005+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:14.684209+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:15.684413+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:16.684617+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:17.684816+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1163264 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:18.685025+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1155072 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:19.685174+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:20.685332+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:21.685495+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:22.685669+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:23.685807+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1146880 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:24.686010+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:25.686135+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:26.686339+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:27.686562+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:28.686708+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:29.686866+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:30.687010+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:31.687147+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:32.687317+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:33.687449+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1130496 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:34.687629+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:35.687772+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:36.687964+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:37.688129+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:38.688280+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:39.688467+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:40.688617+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:41.688773+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:42.688957+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:43.689078+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:44.689205+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:45.689321+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:46.689463+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:47.689590+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:48.689754+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:49.689901+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:50.690004+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:51.690144+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1114112 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:52.690242+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 299.902282715s of 299.933044434s, submitted: 24
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1105920 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:53.690425+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1105920 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:54.690576+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:55.690705+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:56.690985+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:57.691176+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:58.691301+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:59.691422+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:00.691532+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:01.691691+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:02.691811+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:03.691919+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 802816 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:04.692061+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:05.692192+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:06.692344+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:07.692525+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:08.692647+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:09.692754+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:10.692918+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:11.693061+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:12.693232+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:13.693374+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:14.693516+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:15.693767+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:16.693995+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:17.694155+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:18.695077+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:19.695538+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:20.695800+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:21.696409+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:22.696613+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:23.697111+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 794624 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:24.697323+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:25.697462+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:26.697725+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:27.697880+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:28.698021+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74702848 unmapped: 786432 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:29.698265+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:30.698426+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:31.698597+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:32.698730+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:33.699041+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:34.699288+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:35.699497+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:36.699751+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:37.699977+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:38.700095+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:39.700328+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:40.700525+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:41.700943+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:42.701119+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:43.701329+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:44.701527+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:45.701734+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:46.701903+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:47.702057+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:48.702181+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:49.702397+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:50.702596+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:51.702742+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:52.702884+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:53.703080+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 778240 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:54.703305+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 770048 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:55.703450+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 770048 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:56.703605+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 761856 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:57.703818+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 761856 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:58.704005+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74727424 unmapped: 761856 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:59.704182+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:00.704352+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:01.704639+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:02.704872+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:03.705111+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:04.705338+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:05.705505+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:06.705693+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:07.705844+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:08.705984+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:09.706200+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74743808 unmapped: 745472 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:10.706298+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:11.706433+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:12.706530+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:13.706673+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:14.706783+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:15.706891+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:16.707073+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:17.707193+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:18.707301+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:19.707546+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:20.707685+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:21.707836+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:22.708036+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:23.708291+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:24.708482+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 737280 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:25.708662+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:26.708831+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:27.709000+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:28.709154+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:29.709344+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:30.709539+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:31.709712+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:32.709985+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:33.710137+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 720896 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:34.710252+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:35.710487+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:36.710677+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:37.710846+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:38.710981+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:39.711129+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74776576 unmapped: 712704 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:40.711295+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:41.711524+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:42.711679+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:43.711814+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:44.711999+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 704512 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:45.712194+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:46.712386+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:47.712519+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:48.712686+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:49.712869+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:50.713003+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:51.713114+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:52.713395+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:53.713543+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:54.713686+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74801152 unmapped: 688128 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:55.713800+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:56.716450+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:57.716574+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:58.716711+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 679936 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:59.716894+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:00.717034+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:01.717189+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:02.717321+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:03.717454+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:04.717575+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:05.717756+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:06.717915+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:07.718064+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:08.718225+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:09.718417+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:10.718559+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:11.718727+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:12.718857+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:13.719018+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:14.719131+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 671744 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:15.719275+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:16.719470+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:17.719591+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:18.719742+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:19.719873+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:20.720060+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:21.720217+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:22.720393+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:23.720536+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:24.720845+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:25.721442+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:26.722544+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:27.722677+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:28.723016+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:29.723552+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 663552 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:30.723692+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:31.724300+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:32.724583+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:33.724709+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:34.724863+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:35.725025+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:36.725315+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:37.725584+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:38.725851+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:39.726157+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:40.726289+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:41.726426+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:42.726548+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:43.726682+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 655360 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:44.726865+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 647168 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:45.727226+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: do_command 'config diff' '{prefix=config diff}'
Jan 20 19:27:18 compute-0 ceph-osd[88112]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 20 19:27:18 compute-0 ceph-osd[88112]: do_command 'config show' '{prefix=config show}'
Jan 20 19:27:18 compute-0 ceph-osd[88112]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 286720 heap: 75489280 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: do_command 'counter dump' '{prefix=counter dump}'
Jan 20 19:27:18 compute-0 ceph-osd[88112]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 20 19:27:18 compute-0 ceph-osd[88112]: do_command 'counter schema' '{prefix=counter schema}'
Jan 20 19:27:18 compute-0 ceph-osd[88112]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:46.727402+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 75407360 unmapped: 2179072 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: osd.2 122 heartbeat osd_stat(store_statfs(0x4fceb7000/0x0/0x4ffc00000, data 0xba331/0x175000, compress 0x0/0x0/0x0, omap 0x10b0c, meta 0x2bbf4f4), peers [0,1] op hist [])
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: tick
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_tickets
Jan 20 19:27:18 compute-0 ceph-osd[88112]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:47.727511+0000)
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:18 compute-0 ceph-osd[88112]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:18 compute-0 ceph-osd[88112]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 932122 data_alloc: 218103808 data_used: 6867
Jan 20 19:27:18 compute-0 ceph-osd[88112]: prioritycache tune_memory target: 4294967296 mapped: 75685888 unmapped: 1900544 heap: 77586432 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:18 compute-0 ceph-osd[88112]: do_command 'log dump' '{prefix=log dump}'
Jan 20 19:27:18 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} v 0)
Jan 20 19:27:18 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 19:27:18 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:27:18 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:27:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0)
Jan 20 19:27:19 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1755243641' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 20 19:27:19 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14452 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:19 compute-0 podman[246361]: 2026-01-20 19:27:19.429299214 +0000 UTC m=+0.100243111 container health_status c2dee9fcaee559b048034bb424075120f3d26ede15515d7e7d492be2a233177a (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 20 19:27:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} v 0)
Jan 20 19:27:19 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 19:27:19 compute-0 ceph-mon[75120]: from='client.14442 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:19 compute-0 ceph-mon[75120]: pgmap v856: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:19 compute-0 ceph-mon[75120]: from='client.14444 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:19 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/4002882335' entity='client.admin' cmd={"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} : dispatch
Jan 20 19:27:19 compute-0 ceph-mon[75120]: from='client.14448 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:19 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 19:27:19 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1755243641' entity='client.admin' cmd={"prefix": "mgr dump"} : dispatch
Jan 20 19:27:19 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Jan 20 19:27:19 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2287267857' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 20 19:27:19 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14456 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:20 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Jan 20 19:27:20 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3013419084' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 20 19:27:20 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:20 compute-0 ceph-mon[75120]: from='client.14452 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:20 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 19:27:20 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2287267857' entity='client.admin' cmd={"prefix": "mgr metadata"} : dispatch
Jan 20 19:27:20 compute-0 ceph-mon[75120]: from='client.14456 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:20 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3013419084' entity='client.admin' cmd={"prefix": "mgr module ls"} : dispatch
Jan 20 19:27:20 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14463 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:20 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Jan 20 19:27:20 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3861527620' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 20 19:27:21 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14466 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:21 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0)
Jan 20 19:27:21 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1651649235' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 20 19:27:21 compute-0 ceph-mon[75120]: pgmap v857: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:21 compute-0 ceph-mon[75120]: from='client.14460 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:21 compute-0 ceph-mon[75120]: from='client.14463 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:21 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3861527620' entity='client.admin' cmd={"prefix": "mgr services"} : dispatch
Jan 20 19:27:21 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1651649235' entity='client.admin' cmd={"prefix": "mgr versions"} : dispatch
Jan 20 19:27:21 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14470 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:22 compute-0 podman[246676]: 2026-01-20 19:27:22.016439713 +0000 UTC m=+0.088264579 container health_status 155196fbbc13b092614ceb96241eb7ff27bea53d8762b2bd75af0f0fbbdbacef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '730e8569771a791d61f8e4909662c7fdda8a98882b5b5d6fa114d9f0d1022893-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-89f0284f735e59dd539cf5afdfee5247298635ac92b43ebe7ee59e5f6be6c08e-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 20 19:27:22 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0)
Jan 20 19:27:22 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911391847' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 20 19:27:22 compute-0 crontab[246746]: (root) LIST (root)
Jan 20 19:27:22 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:22 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14474 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:22 compute-0 ceph-mon[75120]: from='client.14466 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:22 compute-0 ceph-mon[75120]: from='client.14470 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:22 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1911391847' entity='client.admin' cmd={"prefix": "mon stat"} : dispatch
Jan 20 19:27:22 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14478 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:22 compute-0 ceph-mgr[75417]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 19:27:22 compute-0 ceph-90fff835-31df-513f-a409-b6642f04e6ac-mgr-compute-0-meyjbf[75413]: 2026-01-20T19:27:22.893+0000 7f97a9c36640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Jan 20 19:27:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0)
Jan 20 19:27:23 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/859625886' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 20 19:27:23 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Jan 20 19:27:23 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3285311335' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.038358 7 0.000066
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.046756 7 0.000070
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.084935 3 0.000053
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.084972 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.074727 2 0.000031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.074943 2 0.000035
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000013 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 54 heartbeat osd_stat(store_statfs(0x4fe084000/0x0/0x4ffc00000, data 0xb566e/0x146000, compress 0x0/0x0/0x0, omap 0x6c12, meta 0x1a293ee), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 76365824 unmapped: 1302528 heap: 77668352 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:23.680864+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 1.014564 1 0.000153
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000029 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.089642 2 0.000030
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000011 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 54 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.117679 3 0.000094
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125604 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.118566 3 0.000050
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125450 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.119371 3 0.000148
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125169 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120096 3 0.000127
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.126372 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120069 3 0.000052
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.126103 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120401 3 0.000066
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125901 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120613 3 0.000078
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.122530 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.120649 3 0.000614
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125649 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121010 3 0.000054
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125302 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122465 3 0.000073
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125717 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122926 3 0.000052
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125439 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.122609 3 0.000686
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.125040 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123005 3 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.124993 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123054 3 0.000053
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.123425 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.121706 3 0.000859
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.124183 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.123373 3 0.001231
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.123906 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 39'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a(unlocked)] enter Initial
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000124 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000022
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000016 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000487 1 0.000062
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6(unlocked)] enter Initial
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000101 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000010 1 0.000027
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000147 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000227 1 0.000273
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2(unlocked)] enter Initial
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000014 1 0.000218
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000011 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000217 1 0.000054
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e(unlocked)] enter Initial
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000057 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=0 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000008 1 0.000019
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000070 1 0.000035
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001970 2 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002202 2 0.000055
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001486 2 0.000062
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001363 2 0.000035
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.077669 5 0.000225
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.078403 5 0.000488
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.077875 5 0.000229
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.076118 5 0.000223
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.074394 5 0.000301
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.076301 5 0.000166
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.077339 5 0.000578
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.074457 5 0.000212
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.073436 5 0.001158
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.073785 5 0.000320
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.5( v 50'484 (0'0,50'484] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.075900 5 0.000219
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.073444 5 0.000275
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.076401 5 0.000289
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.077128 5 0.000192
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.073685 5 0.000404
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.074048 5 0.000183
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.269928 4 0.000140
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.359731 5 0.000045
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000017 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 76996608 unmapped: 671744 heap: 77668352 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.442528 5 0.000038
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 1.442583 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067827 1 0.000090
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.427710 5 0.000062
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000014 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.156294 1 0.000171
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000113 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.584173 5 0.000029
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000010 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.602897 5 0.000030
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ReplicaActive 1.602974 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.066547 1 0.000466
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000032 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/45 les/c/f=54/47/0 sis=53) [1] r=0 lpr=53 pi=[45,53)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.663707 5 0.000087
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000031 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.003135 1 0.000274
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.14( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.654671 5 0.000090
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000028 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.003404 1 0.000161
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[10.12( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [1] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.477061 1 0.000365
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000777 1 0.000043
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 55 heartbeat osd_stat(store_statfs(0x4fcede000/0x0/0x4ffc00000, data 0xb740f/0x14a000, compress 0x0/0x0/0x0, omap 0x86c7, meta 0x2bc7939), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.292979 2 0.000106
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.770450 1 0.000060
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000634 1 0.000052
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.030917 2 0.000109
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.802275 1 0.000040
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000484 1 0.000086
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:24.680998+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.048339 2 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.851225 1 0.000024
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000439 1 0.000124
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.892049 1 0.000032
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.040787 2 0.000070
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000481 1 0.000043
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 55 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 55 handle_osd_map epochs [55,56], i have 55, src has [1,56]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 55 handle_osd_map epochs [56,56], i have 56, src has [1,56]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.116025 1 0.000305
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.996921 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.122547 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.122576 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080907822s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.078788757s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] exit Reset 0.000131 1 0.000201
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080821991s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078788757s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.148209 1 0.000178
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.996981 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.122452 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.122488 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080473900s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.078704834s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] exit Reset 0.000205 1 0.000285
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080350876s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078704834s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.067833 1 0.000279
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.997083 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.122274 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.122309 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988234 2 0.000062
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.990006 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080650330s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.079368591s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] exit Reset 0.000246 1 0.000334
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] exit Start 0.000019 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.080478668s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079368591s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.988690 2 0.000141
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.991206 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.027155 1 0.000179
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 0.995929 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.118480 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.118524 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079759598s) [0] async=[0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 active pruub 94.079399109s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989159 2 0.000066
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.990651 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] exit Reset 0.000656 1 0.000694
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56 pruub=15.079503059s) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079399109s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991074 2 0.000092
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.993616 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.004315 3 0.000175
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004834 3 0.000155
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000016 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+recovering+remapped rops=1 mbc={255={(0+1)=5}}] exit Started/Primary/Active/Recovering 0.031165 4 0.000159
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000209 1 0.000103
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped rops=1 mbc={255={(0+1)=5}}] enter Started/Primary/Active/NotRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000008 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped rops=1 mbc={255={(0+1)=5}}] exit Started/Primary/Active/NotRecovering 0.000656 1 0.000242
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped rops=1 mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.005892 4 0.000108
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005212 4 0.000096
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.083210 3 0.000045
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=55/56 n=2 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.081427 2 0.000027
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000040 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 lc 39'19 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.063482 1 0.000286
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000024 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/45 les/c/f=56/47/0 sis=55) [1] r=0 lpr=55 pi=[45,55)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.070942 4 0.000077
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.001037 1 0.000082
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.058651 2 0.000051
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.130765 4 0.000087
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000447 1 0.000221
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.054018 2 0.000094
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.185593 4 0.000058
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000839 1 0.000042
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052077 2 0.000055
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.238614 4 0.000044
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000793 1 0.000078
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 573440 heap: 77668352 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066028 2 0.000076
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.305599 4 0.000071
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000622 1 0.000085
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.068350 2 0.000090
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.374724 4 0.000060
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000594 1 0.000106
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066269 2 0.000062
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.441692 4 0.000445
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000494 1 0.000079
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.061298 2 0.000038
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.503646 4 0.000069
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000568 1 0.000108
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.543641 4 0.000081
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039367 2 0.000047
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000760 1 0.000036
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059181 2 0.000045
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.603644 4 0.000188
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000411 1 0.000093
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.038279 2 0.000079
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.642478 4 0.000174
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000411 1 0.000054
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.024208 2 0.000077
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.742976 1 0.000033
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000454 1 0.000088
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.032500 2 0.000053
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.895127 7 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.895217 7 0.000028
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.895246 7 0.000172
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.290757179s of 10.140542030s, submitted: 785
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875329 7 0.000049
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1d( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875296 7 0.000031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.8( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875368 7 0.000027
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.15( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875462 7 0.000019
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.3( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875570 7 0.000086
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875676 7 0.000075
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.12( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875736 7 0.000040
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.d( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875783 7 0.000099
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875810 7 0.000051
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875875 7 0.000034
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875925 7 0.000019
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.5( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875970 7 0.000089
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.8( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876020 7 0.000036
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.9( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876105 7 0.000039
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.7( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876165 7 0.000029
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876208 7 0.000048
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.2( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876203 7 0.000141
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876234 7 0.000092
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.5( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876279 7 0.000026
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876307 7 0.000110
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.2( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876371 7 0.000020
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.15( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876412 7 0.000140
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.e( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876484 7 0.000120
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.8( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876520 7 0.000021
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.e( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876578 7 0.000024
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.11( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876655 7 0.000029
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.a( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876720 7 0.000022
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876779 7 0.000024
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.11( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876831 7 0.000020
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1a( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876935 7 0.000032
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1c( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877011 7 0.000021
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.16( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876899 7 0.000025
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876945 7 0.000043
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876993 7 0.000022
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1c( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877116 7 0.000037
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877182 7 0.000022
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.18( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877266 7 0.000042
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1b( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877282 7 0.000084
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.11( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877258 7 0.000094
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.18( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876351 7 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.10( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876460 7 0.000069
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1b( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876522 7 0.000076
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876599 7 0.000096
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.10( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876648 7 0.000046
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.b( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876686 7 0.000041
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.4( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876723 7 0.000042
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876799 7 0.000020
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876888 7 0.000037
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.4( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876943 7 0.000046
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.18( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.876977 7 0.000021
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.9( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877057 7 0.000020
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.c( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.877136 7 0.000048
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.14( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.3( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874426 7 0.000033
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.3( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874489 7 0.000039
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.9( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874511 7 0.000024
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.e( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.6( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874592 7 0.000027
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.6( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874552 7 0.000047
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.6( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874609 7 0.000030
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.f( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874655 7 0.000022
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.3( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874707 7 0.000026
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.e( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.17( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874753 7 0.000020
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.17( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874815 7 0.000038
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.6( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874896 7 0.000043
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.874937 7 0.000022
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.f( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875012 7 0.000052
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.9( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875016 7 0.000052
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.13( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.15( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875097 7 0.000026
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.15( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875195 7 0.000055
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.a( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875278 7 0.000052
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875313 7 0.000027
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.19( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875395 7 0.000046
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875438 7 0.000054
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875506 7 0.000023
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875566 7 0.000039
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1f( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875627 7 0.000058
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875623 7 0.000029
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.17( empty local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875687 7 0.000064
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1b( empty local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875701 7 0.000018
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.12( empty local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.875777 7 0.000041
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 2.827224 7 0.000060
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.12( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.461630 4 0.000104
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.300950 4 0.000215
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.008881 1 0.000121
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.904067 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.15( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 3.925059 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.015372 1 0.000057
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.910651 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.931866 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.022082 1 0.000091
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.897467 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1d( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.938448 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.029607 1 0.000057
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.924864 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.1e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.946022 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.036691 1 0.000081
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.912041 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.949494 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.15( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.044055 1 0.000067
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.15( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.919475 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.15( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.960664 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.3( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.051389 1 0.000063
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.3( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.926916 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.3( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.963858 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.058729 1 0.000074
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.934360 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.971902 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.12( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066631 1 0.000065
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.12( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.942342 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.12( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.982361 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.d( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.080622 1 0.000039
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.d( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.956448 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.d( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 3.995032 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.086510 1 0.000049
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.962338 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.1( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.000852 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088245 1 0.000030
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.964126 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.002297 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.095207 1 0.000031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.971128 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.2( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.006259 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:25.681129+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.102565 1 0.000054
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.978534 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.017090 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.8( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.110029 1 0.000065
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.8( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.986051 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.8( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.020876 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.9( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.118804 1 0.000045
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.9( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 2.994911 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.9( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.031601 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.7( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.124943 1 0.000051
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.7( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.001141 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.7( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.037869 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.132050 1 0.000041
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.008251 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.047037 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.2( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.139342 1 0.000034
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.2( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.015603 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.2( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.054910 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.146716 1 0.000040
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.022966 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.11( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.063160 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.5( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.154511 1 0.000027
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.5( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.030811 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.5( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.068773 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.161507 1 0.000037
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.037824 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.4( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.072262 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.2( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.168825 1 0.000057
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.2( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.045184 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.2( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.082589 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.15( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.176371 1 0.000033
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.15( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.052779 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.15( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.074732 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.e( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.183410 1 0.000057
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.e( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.059869 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.e( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.096126 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.8( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.190624 1 0.000040
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.8( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.067140 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.8( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.102283 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.198020 1 0.000048
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.074576 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.109274 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.205338 1 0.000053
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.081961 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[3.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.116043 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.212684 1 0.000057
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.089384 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.a( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.124129 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.220064 1 0.000052
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.096815 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[8.1b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.131155 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.11( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.227826 1 0.000034
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.11( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.104637 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[7.11( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.137307 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1a( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.235362 1 0.000079
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1a( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.112249 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 56 pg[11.1a( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.146135 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 56 handle_osd_map epochs [57,57], i have 56, src has [1,57]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.759197 1 0.000165
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.022317 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.148730 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.148770 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.570116 1 0.000172
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] exit Started/Primary/Active 2.019351 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053964615s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.078872681s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] exit Started/Primary 3.144408 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] exit Started 3.144458 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=50'484 lcod 55'485 mlcod 55'485 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.814267 1 0.000109
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.021422 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.147090 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.147128 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] exit Reset 0.000284 1 0.000415
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053746223s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.078872681s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054486275s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079605103s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] exit Reset 0.000341 1 0.000413
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] exit Start 0.000016 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054221153s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079605103s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.341688 1 0.000175
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.022650 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.148780 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.148813 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054390907s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079986572s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] exit Reset 0.000121 1 0.000195
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054670334s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 active pruub 94.079673767s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.442018 1 0.000167
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.022548 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.054303169s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079986572s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.148470 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.148580 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] exit Reset 0.000867 1 0.000914
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] exit Start 0.000019 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053636551s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079513550s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053840637s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY pruub 94.079673767s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] exit Reset 0.000113 1 0.000244
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] exit Start 0.000016 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053556442s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079513550s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 57 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.707401 1 0.000573
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.020635 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.146095 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.146122 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.504197 1 0.000223
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.245703 1 0.000117
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.020802 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.303518 1 0.000132
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.146547 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.019629 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.143552 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053482056s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079612732s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.143586 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.022082 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.146575 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] exit Reset 0.000140 1 0.000185
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053318024s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079582214s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.147501 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053416252s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079689026s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.147572 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] exit Start 0.000033 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053367615s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079612732s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] exit Reset 0.000097 1 0.000211
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053251266s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079582214s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.640774 1 0.000129
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.020318 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.143767 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.144724 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.279072 1 0.000256
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053354263s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079818726s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.020503 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.145513 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.145549 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053389549s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079895020s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] exit Reset 0.000093 1 0.000132
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] exit Reset 0.000060 1 0.000092
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053203583s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079658508s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] exit Reset 0.000254 1 0.000296
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] exit Start 0.000034 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053297043s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079818726s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053352356s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079895020s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053241730s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079689026s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.402410 1 0.000153
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 2.020374 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 3.144575 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 3.144598 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] async=[0] r=0 lpr=54 pi=[49,54)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053125381s) [0] async=[0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 active pruub 94.079887390s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] exit Reset 0.000416 1 0.000690
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] exit Reset 0.000077 1 0.000589
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.053079605s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079887390s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] exit Start 0.000075 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57 pruub=14.052889824s) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 94.079658508s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.030506 7 0.000140
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000064 1 0.000064
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1c( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.247082 4 0.000062
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1c( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.124066 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1c( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.157366 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.032604 7 0.000136
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033063 7 0.000098
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000074 1 0.000046
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000080 1 0.000048
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034776 7 0.000085
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000050 1 0.000039
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.16( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.252918 4 0.000061
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.16( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.129972 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.16( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.152699 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.256713 4 0.000045
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.133643 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.156523 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.264079 4 0.000033
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.141055 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.173662 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.271682 4 0.000089
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.148818 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1c( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.188995 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.278775 4 0.000060
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.155940 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.178532 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.286550 4 0.000063
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.163806 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.204269 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.293595 4 0.000047
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.170910 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1b( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.204414 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.11( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.300831 4 0.000034
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.11( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.178146 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.11( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.218706 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.18( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.308171 4 0.000048
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.18( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.185484 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.18( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [2] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.220282 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.10( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.315710 4 0.000063
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.10( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.192125 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.10( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.233975 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.322976 4 0.000080
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.199504 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.241576 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.330405 4 0.000047
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.206973 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.249138 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.10( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.337575 4 0.000069
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.10( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.214238 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.10( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.255746 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.345031 4 0.000041
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.221730 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.b( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.259372 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.4( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.352326 4 0.000046
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.4( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.229064 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.4( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.266895 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.359658 4 0.000055
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.236434 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.272463 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.367154 4 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.244022 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.283460 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.4( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.374202 4 0.000060
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.4( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.251142 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.4( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.287794 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.18( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.381499 4 0.000038
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.18( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.258501 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.18( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.301072 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.9( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.388730 4 0.000057
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.9( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.265749 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.9( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.302175 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.396060 4 0.000058
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.273164 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.c( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.309710 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.14( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.403446 4 0.000051
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.14( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.280623 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.14( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.323212 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.410763 4 0.000060
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.285235 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.3( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.327925 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.9( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.418036 4 0.000042
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.9( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.292557 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.9( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.332567 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.425413 4 0.000053
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.299961 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.e( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.343750 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.432715 4 0.000055
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.307349 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.6( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.350926 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.6( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.440298 4 0.000051
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.6( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.314914 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.6( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.354143 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.447429 4 0.000050
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.322089 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.f( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.366428 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.3( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.454824 4 0.000040
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.3( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.329515 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.3( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.373770 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.e( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.462234 4 0.000041
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.e( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.336984 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.e( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.380126 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.469713 4 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.344520 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.17( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.371512 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.6( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.476913 4 0.000060
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.6( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.351810 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.6( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.392273 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.484138 4 0.000054
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.359078 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.1( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.399190 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.491384 4 0.000042
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.366381 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.f( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.408071 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.498757 4 0.000031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.373800 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.413781 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.13( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.506066 4 0.000060
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.13( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.381127 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.13( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.407829 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.513415 4 0.000060
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.388549 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.15( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.425636 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.520959 4 0.000057
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.396212 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.436595 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.528318 4 0.000071
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.403655 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.c( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.447304 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.19( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.535502 4 0.000078
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.19( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.410861 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.19( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.449725 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78217216 unmapped: 1548288 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871146 data_alloc: 218103808 data_used: 16452
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.542786 4 0.000044
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.418220 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1d( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.445740 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.18( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.550268 4 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.18( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.425782 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.18( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.453174 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.557856 4 0.000058
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.433451 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.471044 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.565065 4 0.000058
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.440672 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.1f( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.486895 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1a( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.572221 4 0.000070
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1a( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.447932 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.1a( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.486620 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.17( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.579767 4 0.000058
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.17( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.455447 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[11.17( empty lb MIN local-lis/les=51/52 n=0 ec=51/37 lis/c=51/51 les/c/f=52/52/0 sis=53) [0] r=-1 lpr=53 pi=[51,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.502370 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1b( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.587038 4 0.000042
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1b( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.462781 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[7.1b( empty lb MIN local-lis/les=47/48 n=0 ec=47/23 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.509582 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.12( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.594260 4 0.000064
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.12( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.470015 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[3.12( empty lb MIN local-lis/les=43/44 n=0 ec=43/17 lis/c=43/43 les/c/f=44/44/0 sis=53) [0] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 4.508401 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.14( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.601443 4 0.000062
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.14( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 3.477299 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.14( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 unknown NOTIFY mbc={}] exit Started 4.524102 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.12( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.660706 5 0.000124
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.12( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete 3.487981 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.12( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [2] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started 4.579958 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.6( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.667979 5 0.000189
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.6( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete 2.129656 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.6( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=1 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started 4.583602 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 DELETING pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.682761 5 0.000184
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started/ToDelete 1.983793 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[8.f( v 32'6 (0'0,32'6] lb MIN local-lis/les=47/48 n=0 ec=47/31 lis/c=47/47 les/c/f=48/48/0 sis=53) [0] r=-1 lpr=53 pi=[47,53)/1 pct=0'0 crt=32'6 lcod 0'0 active mbc={}] exit Started 4.600137 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 DELETING pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.484653 2 0.000214
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.484776 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.515344 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 DELETING pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.512550 2 0.000184
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.512671 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.545320 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 DELETING pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.542031 2 0.000107
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.542179 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.575316 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 DELETING pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.575603 2 0.000168
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.575769 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=-1 lpr=56 pi=[49,56)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.610610 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:26.681323+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 57 heartbeat osd_stat(store_statfs(0x4fcedf000/0x0/0x4ffc00000, data 0xbc571/0x14d000, compress 0x0/0x0/0x0, omap 0x9467, meta 0x2bc6b99), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 57 handle_osd_map epochs [58,58], i have 58, src has [1,58]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098329 6 0.000297
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098183 6 0.000173
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098573 6 0.000105
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098909 6 0.000102
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098911 6 0.000079
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY mbc={}] exit Started/Stray 1.099347 6 0.000156
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098365 6 0.000252
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.099929 6 0.000115
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.099627 6 0.000133
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.099391 6 0.000114
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.098956 6 0.000147
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.100525 6 0.000574
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000859 1 0.000106
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002129 1 0.000063
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002329 2 0.000030
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002520 2 0.000108
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.002743 2 0.000021
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003248 2 0.000098
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=50'484 lcod 55'485 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003356 2 0.000031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003408 2 0.000069
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003512 2 0.000048
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003582 2 0.000101
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003560 2 0.000588
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.003513 2 0.000291
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.092262 3 0.000423
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.093361 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.191858 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.157671 3 0.000254
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.159862 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.258097 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1589248 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.209272 2 0.000267
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.211660 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.310632 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.231227 2 0.000320
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.233851 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.332518 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.290337 2 0.000352
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.293231 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.392177 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=55'486 lcod 55'485 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.356413 2 0.000273
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=55'486 lcod 55'485 unknown NOTIFY mbc={}] exit Started/ToDelete 0.359720 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=55'486 lcod 55'485 unknown NOTIFY mbc={}] exit Started 1.459164 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.423096 2 0.000143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.426533 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.525034 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.452315 2 0.000118
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.455797 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.555779 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.511561 2 0.000119
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.515132 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.614826 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.570774 2 0.000118
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.574419 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.673911 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.637705 2 0.000205
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.641400 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.740439 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 DELETING pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.689200 2 0.000114
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.692766 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] lb MIN local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=-1 lpr=57 pi=[49,57)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.793400 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:27.681874+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78176256 unmapped: 1589248 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:28.682003+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78077952 unmapped: 1687552 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:29.682230+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:04:58.703541+0000 osd.1 (osd.1) 14 : cluster [DBG] 11.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:04:58.713973+0000 osd.1 (osd.1) 15 : cluster [DBG] 11.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 58 heartbeat osd_stat(store_statfs(0x4fcef5000/0x0/0x4ffc00000, data 0xbd21a/0x135000, compress 0x0/0x0/0x0, omap 0x96ea, meta 0x2bc6916), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78110720 unmapped: 1654784 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 15)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:04:58.703541+0000 osd.1 (osd.1) 14 : cluster [DBG] 11.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:04:58.713973+0000 osd.1 (osd.1) 15 : cluster [DBG] 11.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:30.682728+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78118912 unmapped: 1646592 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 664579 data_alloc: 218103808 data_used: 12212
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:31.683119+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:00.749145+0000 osd.1 (osd.1) 16 : cluster [DBG] 7.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:00.759741+0000 osd.1 (osd.1) 17 : cluster [DBG] 7.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78127104 unmapped: 1638400 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 17)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:00.749145+0000 osd.1 (osd.1) 16 : cluster [DBG] 7.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:00.759741+0000 osd.1 (osd.1) 17 : cluster [DBG] 7.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:32.683464+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 10.216760 14 0.000140
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 10.312217 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 11.325834 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 11.325868 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707896233s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897071838s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] exit Reset 0.000110 1 0.000166
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59 pruub=13.707833290s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897071838s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 8.932058 11 0.000230
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 10.310878 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 11.327299 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 11.327338 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707865715s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897239685s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] exit Reset 0.000159 1 0.000223
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] exit Start 0.000016 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707767487s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897239685s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 8.864652 11 0.000122
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 10.310615 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 11.329051 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 11.329080 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707477570s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897384644s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] exit Reset 0.000054 1 0.000124
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707447052s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897384644s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 8.641348 11 0.000176
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 10.310128 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 11.330560 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 11.330608 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.707047462s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 active pruub 100.897460938s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] exit Reset 0.000220 1 0.000346
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] exit Start 0.000027 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 59 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59 pruub=13.706920624s) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY pruub 100.897460938s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 59 handle_osd_map epochs [59,59], i have 59, src has [1,59]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78135296 unmapped: 1630208 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c(unlocked)] enter Initial
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=0 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000144 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=0 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000052 1 0.000070
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000250 1 0.000065
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 0.601312 6 0.000106
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 0.601825 6 0.000142
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4(unlocked)] enter Initial
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=0 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000105 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=0 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000006 1 0.000018
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000078 1 0.000035
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 0.602454 6 0.000109
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.000807 2 0.000060
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 0.601509 6 0.000129
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetLog 0.001191 2 0.000030
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.072709 3 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.072773 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000164 1 0.000126
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:33.683633+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.131816 3 0.000035
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.131850 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000105 1 0.000088
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.198173 3 0.000065
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.198209 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000128 1 0.000113
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 DELETING pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.129864 2 0.000265
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.130075 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 0.805350 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 DELETING pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.152780 2 0.000153
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.152957 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 0.886165 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 DELETING pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.152002 2 0.000236
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.152219 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 0.952024 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.420158 3 0.000025
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.420184 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000110 1 0.000066
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-mon[75120]: pgmap v858: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 DELETING pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.024929 2 0.000280
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.025145 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=-1 lpr=59 pi=[53,59)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 1.047252 0 0.000000
Jan 20 19:27:23 compute-0 ceph-mon[75120]: from='client.14474 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 78192640 unmapped: 1572864 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 60 handle_osd_map epochs [60,61], i have 60, src has [1,61]
Jan 20 19:27:23 compute-0 ceph-mon[75120]: from='client.14478 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.022405 2 0.000109
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=4 mbc={}] exit Started/Primary/Peering 1.023990 0 0.000000
Jan 20 19:27:23 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/859625886' entity='client.admin' cmd={"prefix": "node ls"} : dispatch
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 unknown m=4 mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.023558 2 0.000057
Jan 20 19:27:23 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3285311335' entity='client.admin' cmd={"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} : dispatch
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.024722 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002404 3 0.000211
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000105 1 0.000090
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 lc 39'16 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.004288 3 0.000412
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.007845 3 0.000048
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=60/61 n=1 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.006223 3 0.000055
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000004 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 lc 39'15 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=4 mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:34.683773+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.262984 1 0.000051
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000025 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=60/61 n=2 ec=45/22 lis/c=60/45 les/c/f=61/47/0 sis=60) [1] r=0 lpr=60 pi=[45,60)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79380480 unmapped: 385024 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:35.683950+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 61 heartbeat osd_stat(store_statfs(0x4fcee9000/0x0/0x4ffc00000, data 0xc2707/0x13d000, compress 0x0/0x0/0x0, omap 0xa3ce, meta 0x2bc5c32), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 344064 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 669155 data_alloc: 218103808 data_used: 12212
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:36.684096+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.939429283s of 11.293741226s, submitted: 284
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79421440 unmapped: 344064 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 61 heartbeat osd_stat(store_statfs(0x4fcee8000/0x0/0x4ffc00000, data 0xc2ab3/0x13e000, compress 0x0/0x0/0x0, omap 0xa3ce, meta 0x2bc5c32), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:37.684224+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:06.871762+0000 osd.1 (osd.1) 18 : cluster [DBG] 8.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:06.882294+0000 osd.1 (osd.1) 19 : cluster [DBG] 8.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79462400 unmapped: 303104 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 19)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:06.871762+0000 osd.1 (osd.1) 18 : cluster [DBG] 8.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:06.882294+0000 osd.1 (osd.1) 19 : cluster [DBG] 8.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:38.684419+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:07.842816+0000 osd.1 (osd.1) 20 : cluster [DBG] 3.1c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:07.853415+0000 osd.1 (osd.1) 21 : cluster [DBG] 3.1c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 286720 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 21)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:07.842816+0000 osd.1 (osd.1) 20 : cluster [DBG] 3.1c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:07.853415+0000 osd.1 (osd.1) 21 : cluster [DBG] 3.1c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:39.684698+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79478784 unmapped: 286720 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 61 handle_osd_map epochs [61,62], i have 61, src has [1,62]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 16.479237 24 0.000205
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 17.588266 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 18.604017 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 18.604103 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 15.984634 21 0.000534
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430742264s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 active pruub 108.897323608s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] exit Reset 0.000192 1 0.000346
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] exit Start 0.000044 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62 pruub=14.430643082s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897323608s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 17.587705 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 18.605747 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 18.605821 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=53) [1] r=0 lpr=53 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430679321s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 active pruub 108.897682190s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] exit Reset 0.000171 1 0.000551
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] exit Start 0.000119 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 62 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62 pruub=14.430572510s) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY pruub 108.897682190s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:40.684905+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 62 heartbeat osd_stat(store_statfs(0x4fcee9000/0x0/0x4ffc00000, data 0xc47ed/0x141000, compress 0x0/0x0/0x0, omap 0xa650, meta 0x2bc59b0), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79486976 unmapped: 278528 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 677806 data_alloc: 218103808 data_used: 12212
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 63 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 1.025293 7 0.000319
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 1.026610 7 0.000179
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.071377 2 0.000374
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.071459 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000092 1 0.000098
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.196039 2 0.000105
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.196140 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000212 1 0.000234
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 DELETING pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.130071 2 0.000266
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.130234 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 1.227281 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 DELETING pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.027069 2 0.000221
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.027378 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=-1 lpr=62 pi=[53,62)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 1.250292 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:41.685121+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 23 sent 21 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:10.917494+0000 osd.1 (osd.1) 22 : cluster [DBG] 8.17 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:10.927996+0000 osd.1 (osd.1) 23 : cluster [DBG] 8.17 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 270336 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 23)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:10.917494+0000 osd.1 (osd.1) 22 : cluster [DBG] 8.17 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:10.927996+0000 osd.1 (osd.1) 23 : cluster [DBG] 8.17 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:42.685442+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.535138 43 0.001415
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.541382 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.541985 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.542176 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465168953s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.653533936s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] exit Reset 0.000093 1 0.000158
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.465118408s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653533936s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.536687 43 0.000208
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.541828 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.541887 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.541915 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463762283s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.653617859s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] exit Reset 0.000057 1 0.000123
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.463727951s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.653617859s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 64 handle_osd_map epochs [63,64], i have 64, src has [1,64]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.526382 43 0.000261
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.540643 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.541685 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.541788 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473500252s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.663787842s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] exit Reset 0.000143 1 0.000242
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.473434448s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.663787842s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 27.532734 43 0.000280
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 27.541615 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 27.541706 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 27.541742 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467473984s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 active pruub 109.658554077s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] exit Reset 0.000113 1 0.000264
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 64 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64 pruub=12.467450142s) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 109.658554077s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79495168 unmapped: 270336 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 64 handle_osd_map epochs [65,65], i have 64, src has [1,65]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.332648 3 0.000057
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.332707 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.333137 3 0.000042
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.331953 3 0.000096
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.332012 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000095 1 0.000139
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000012 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.333362 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.334859 3 0.000039
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.334898 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=64) [2] r=-1 lpr=64 pi=[49,64)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000158 1 0.000224
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000015 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000207 1 0.000229
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000228 1 0.000442
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000045 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000108 1 0.000117
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000081 1 0.000082
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000143 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000016 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000136 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000593 1 0.000624
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000011 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000041 1 0.000090
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000044 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000010 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 65 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:43.685584+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79503360 unmapped: 262144 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 65 handle_osd_map epochs [65,66], i have 65, src has [1,66]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.047315 4 0.000110
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.047549 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.048207 4 0.000180
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.048484 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 66 handle_osd_map epochs [66,66], i have 66, src has [1,66]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.048584 4 0.000107
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.048899 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.048574 4 0.000307
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.048998 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.005287 5 0.000253
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000092 1 0.000071
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.006281 5 0.000397
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.005244 5 0.000965
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.005922 5 0.000257
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000333 1 0.000025
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.044018 1 0.000024
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.043689 2 0.000072
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000611 1 0.000115
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.032571 2 0.000043
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.077289 1 0.000018
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000462 1 0.000056
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.059428 2 0.000061
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.137105 1 0.000120
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000274 1 0.000032
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045446 2 0.000044
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 66 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:44.685750+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79527936 unmapped: 237568 heap: 79765504 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.919329 1 0.000065
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003055 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.050642 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.050707 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.003064156s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577781677s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] exit Reset 0.000105 1 0.000197
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] exit Start 0.000017 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002995491s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577781677s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 67 handle_osd_map epochs [67,67], i have 67, src has [1,67]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.859335 1 0.000066
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.003032 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.051564 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.051602 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002538681s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577713013s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] exit Reset 0.000182 1 0.000707
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] exit Start 0.000049 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.002419472s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577713013s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.815168 1 0.000103
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004156 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.053096 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.053147 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001266479s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577674866s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] exit Reset 0.000154 1 0.000238
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] exit Start 0.000033 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.001171112s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.954672 1 0.000213
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.004342 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.053403 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.053456 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=65) [2]/[1] async=[2] r=0 lpr=65 pi=[49,65)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000796318s) [2] async=[2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 active pruub 114.577674866s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] exit Reset 0.000134 1 0.000241
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] exit Start 0.000043 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 67 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67 pruub=15.000699997s) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 114.577674866s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x5614db4e1800
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x5614dcfbe800
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x5614dce11400
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 67 heartbeat osd_stat(store_statfs(0x4fcedf000/0x0/0x4ffc00000, data 0xcb793/0x14b000, compress 0x0/0x0/0x0, omap 0xb0f7, meta 0x2bc4f09), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:45.685897+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80535552 unmapped: 278528 heap: 80814080 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 685971 data_alloc: 218103808 data_used: 12676
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 67 handle_osd_map epochs [68,68], i have 67, src has [1,68]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012222 6 0.000311
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.012818 6 0.000157
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000221 1 0.000039
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014722 6 0.000143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.014301 6 0.000180
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000440 1 0.000037
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000935 2 0.000088
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001383 2 0.000029
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.069710 3 0.000198
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.070019 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.1e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.082354 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.113842 3 0.000157
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.114333 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.6( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.127228 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614dcfbe800 session 0x5614da7f7a40
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614dce11400 session 0x5614dcd66e00
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614db4e1800 session 0x5614da809a40
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.142979 2 0.000402
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.143982 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.16( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=6 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.158822 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 DELETING pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.201534 2 0.000266
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.202967 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 68 pg[9.e( v 39'483 (0'0,39'483] lb MIN local-lis/les=65/66 n=7 ec=49/33 lis/c=65/49 les/c/f=66/50/0 sis=67) [2] r=-1 lpr=67 pi=[49,67)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.217373 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:46.686017+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 25 sent 23 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:16.002515+0000 osd.1 (osd.1) 24 : cluster [DBG] 11.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:16.013089+0000 osd.1 (osd.1) 25 : cluster [DBG] 11.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x5614dbb6c800
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x5614dd22ec00
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.677583694s of 10.201479912s, submitted: 85
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x5614dc986400
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80855040 unmapped: 1007616 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614dd22ec00 session 0x5614dcd0e380
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614dc986400 session 0x5614dc2db340
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 68 ms_handle_reset con 0x5614dbb6c800 session 0x5614daf4f6c0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 25)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:16.002515+0000 osd.1 (osd.1) 24 : cluster [DBG] 11.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:16.013089+0000 osd.1 (osd.1) 25 : cluster [DBG] 11.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 68 handle_osd_map epochs [68,69], i have 68, src has [1,69]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:47.686253+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 69 heartbeat osd_stat(store_statfs(0x4fcedb000/0x0/0x4ffc00000, data 0xcea83/0x14b000, compress 0x0/0x0/0x0, omap 0xb627, meta 0x2bc49d9), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80453632 unmapped: 1409024 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 69 handle_osd_map epochs [69,70], i have 69, src has [1,70]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:48.686412+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1761280 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:49.686612+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 1744896 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:50.686834+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1720320 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 649669 data_alloc: 218103808 data_used: 11846
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 70 heartbeat osd_stat(store_statfs(0x4fced7000/0x0/0x4ffc00000, data 0xd1f75/0x151000, compress 0x0/0x0/0x0, omap 0xbb0e, meta 0x2bc44f2), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:51.686971+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80142336 unmapped: 1720320 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:52.687103+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 70 handle_osd_map epochs [71,71], i have 70, src has [1,71]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 37.534300 65 0.000257
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 37.543722 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 37.543771 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 37.543806 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.466033936s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 active pruub 117.657966614s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] exit Reset 0.000180 1 0.000135
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465988159s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 117.657966614s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active+clean] exit Started/Primary/Active/Clean 37.534183 65 0.000241
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started/Primary/Active 37.543151 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started/Primary 37.543198 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started 37.543227 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465965271s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 active pruub 117.658271790s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] exit Reset 0.000109 1 0.000156
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 71 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71 pruub=10.465903282s) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 117.658271790s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 71 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80125952 unmapped: 1736704 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started/Stray 0.520754 3 0.000048
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started 0.520825 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] exit Reset 0.000132 1 0.000189
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000045 1 0.000050
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000036 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.521397 3 0.000136
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.521467 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=71) [2] r=-1 lpr=71 pi=[49,71)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000147 1 0.000222
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000050 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000053 1 0.000191
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000044 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000017 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 72 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:53.687262+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:23.085930+0000 osd.1 (osd.1) 26 : cluster [DBG] 7.1e scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:23.096489+0000 osd.1 (osd.1) 27 : cluster [DBG] 7.1e scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 79994880 unmapped: 1867776 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 72 handle_osd_map epochs [72,73], i have 72, src has [1,73]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 72 handle_osd_map epochs [73,73], i have 73, src has [1,73]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.993032 4 0.000102
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.993173 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.992751 4 0.000154
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.992967 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 31.810121 58 0.000226
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 31.828012 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 32.847414 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 32.847455 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=53) [1] r=0 lpr=53 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190241814s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 active pruub 116.897994995s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] exit Reset 0.000059 1 0.000106
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73 pruub=8.190208435s) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 116.897994995s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.002994 5 0.000273
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000143 1 0.000138
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000414 1 0.000072
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.005830 5 0.000272
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 73 handle_osd_map epochs [73,73], i have 73, src has [1,73]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 27)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:23.085930+0000 osd.1 (osd.1) 26 : cluster [DBG] 7.1e scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:23.096489+0000 osd.1 (osd.1) 27 : cluster [DBG] 7.1e scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.045743 2 0.000036
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.043349 1 0.000094
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000592 1 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052127 2 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 73 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:54.687453+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 1859584 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 73 handle_osd_map epochs [74,74], i have 73, src has [1,74]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.906353 1 0.000105
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.008493 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 2.001491 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 2.001614 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997115135s) [2] async=[2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 active pruub 124.712516785s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] exit Reset 0.000157 1 0.000219
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.997002602s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 124.712516785s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.959542 1 0.000100
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active 1.009121 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary 2.002313 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started 2.002340 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=72) [2]/[1] async=[2] r=0 lpr=72 pi=[49,72)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993810654s) [2] async=[2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 active pruub 124.709548950s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] exit Reset 0.000140 1 0.000193
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74 pruub=14.993714333s) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 124.709548950s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.016025 7 0.000098
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000101 1 0.000151
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 DELETING pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.004090 1 0.000027
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.004248 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] lb MIN local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=-1 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.020388 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:55.687606+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80003072 unmapped: 1859584 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 662224 data_alloc: 218103808 data_used: 11846
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/Stray 1.055232 6 0.000105
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000189 1 0.000066
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.055756 6 0.000188
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000880 1 0.000487
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] lb MIN local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 DELETING pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.054748 3 0.000352
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] lb MIN local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete 0.055026 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.18( v 68'487 (0'0,68'487] lb MIN local-lis/les=72/73 n=6 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started 1.110313 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] lb MIN local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 DELETING pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.105306 3 0.000229
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] lb MIN local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.106336 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 75 pg[9.8( v 39'483 (0'0,39'483] lb MIN local-lis/les=72/73 n=7 ec=49/33 lis/c=72/49 les/c/f=73/50/0 sis=74) [2] r=-1 lpr=74 pi=[49,74)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.162332 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:56.687749+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80027648 unmapped: 1835008 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcece000/0x0/0x4ffc00000, data 0xda796/0x15c000, compress 0x0/0x0/0x0, omap 0xc811, meta 0x2bc37ef), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:57.687923+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.866786003s of 10.976214409s, submitted: 53
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.1d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80044032 unmapped: 1818624 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:58.688116+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:28.049413+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.1d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:28.060057+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.1d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcece000/0x0/0x4ffc00000, data 0xda796/0x15c000, compress 0x0/0x0/0x0, omap 0xc811, meta 0x2bc37ef), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80076800 unmapped: 1785856 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcece000/0x0/0x4ffc00000, data 0xda796/0x15c000, compress 0x0/0x0/0x0, omap 0xc811, meta 0x2bc37ef), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 29)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:28.049413+0000 osd.1 (osd.1) 28 : cluster [DBG] 7.1d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:28.060057+0000 osd.1 (osd.1) 29 : cluster [DBG] 7.1d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:59.688431+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1777664 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:00.688693+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1777664 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 642583 data_alloc: 218103808 data_used: 11318
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:01.688824+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80084992 unmapped: 1777664 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 75 heartbeat osd_stat(store_statfs(0x4fcece000/0x0/0x4ffc00000, data 0xda796/0x15c000, compress 0x0/0x0/0x0, omap 0xc811, meta 0x2bc37ef), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=39'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 37.626592 62 0.000245
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 37.631950 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 38.625637 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 38.625770 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=55) [1] r=0 lpr=55 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.373511314s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 active pruub 127.006271362s@ mbc={}] PeeringState::start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] exit Reset 0.000755 1 0.001078
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] exit Start 0.000287 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 76 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76 pruub=10.372819901s) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 127.006271362s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 76 handle_osd_map epochs [76,76], i have 76, src has [1,76]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.227669 7 0.000471
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000095 1 0.000075
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] lb MIN local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 DELETING pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.002485 1 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] lb MIN local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002632 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] lb MIN local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=-1 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.230660 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:02.688940+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1761280 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b(unlocked)] enter Initial
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=0 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000194 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=0 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000051 1 0.000107
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000355 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000218 1 0.000574
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.001040 2 0.000124
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 78 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:03.689089+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80101376 unmapped: 1761280 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 78 handle_osd_map epochs [78,79], i have 78, src has [1,79]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 79 handle_osd_map epochs [79,79], i have 79, src has [1,79]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010165 2 0.000049
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 1.011527 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.002210 3 0.000181
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000135 1 0.000085
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000012 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.010246 3 0.000174
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000023 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=78/79 n=1 ec=45/22 lis/c=78/59 les/c/f=79/60/0 sis=78) [1] r=0 lpr=78 pi=[59,78)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:04.689213+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80117760 unmapped: 1744896 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:05.689354+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:35.067020+0000 osd.1 (osd.1) 30 : cluster [DBG] 8.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:35.077543+0000 osd.1 (osd.1) 31 : cluster [DBG] 8.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1695744 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 661565 data_alloc: 218103808 data_used: 11318
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:06.689552+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 4 last_log 33 sent 31 num 4 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:36.064635+0000 osd.1 (osd.1) 32 : cluster [DBG] 7.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:36.075100+0000 osd.1 (osd.1) 33 : cluster [DBG] 7.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 31)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:35.067020+0000 osd.1 (osd.1) 30 : cluster [DBG] 8.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:35.077543+0000 osd.1 (osd.1) 31 : cluster [DBG] 8.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1695744 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:07.689720+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 4 last_log 35 sent 33 num 4 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:37.098630+0000 osd.1 (osd.1) 34 : cluster [DBG] 8.8 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:37.109177+0000 osd.1 (osd.1) 35 : cluster [DBG] 8.8 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 33)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:36.064635+0000 osd.1 (osd.1) 32 : cluster [DBG] 7.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:36.075100+0000 osd.1 (osd.1) 33 : cluster [DBG] 7.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 35)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:37.098630+0000 osd.1 (osd.1) 34 : cluster [DBG] 8.8 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:37.109177+0000 osd.1 (osd.1) 35 : cluster [DBG] 8.8 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 79 heartbeat osd_stat(store_statfs(0x4fcec4000/0x0/0x4ffc00000, data 0xe16a8/0x168000, compress 0x0/0x0/0x0, omap 0xd25d, meta 0x2bc2da3), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.854744911s of 10.015905380s, submitted: 22
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1695744 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:08.689909+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:38.065584+0000 osd.1 (osd.1) 36 : cluster [DBG] 8.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:38.076199+0000 osd.1 (osd.1) 37 : cluster [DBG] 8.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 37)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:38.065584+0000 osd.1 (osd.1) 36 : cluster [DBG] 8.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:38.076199+0000 osd.1 (osd.1) 37 : cluster [DBG] 8.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80175104 unmapped: 1687552 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:09.690095+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 54.519082 94 0.000653
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 54.524615 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 54.524897 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 54.524952 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=39'483 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481973648s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 active pruub 133.654129028s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] exit Reset 0.000167 1 0.000221
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.481848717s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 133.654129028s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active+clean] exit Started/Primary/Active/Clean 54.514614 94 0.000360
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started/Primary/Active 54.523269 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started/Primary 54.523331 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started 54.523365 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=49) [1] r=0 lpr=49 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485882759s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 active pruub 133.658874512s@ mbc={}] PeeringState::start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] exit Reset 0.000153 1 0.000262
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] exit Start 0.000069 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 80 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80 pruub=9.485768318s) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 133.658874512s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80183296 unmapped: 1679360 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:10.690246+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 80 handle_osd_map epochs [81,81], i have 80, src has [1,81]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.822653 3 0.000049
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.822710 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Reset 0.000156 1 0.000217
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started/Stray 0.821990 3 0.000171
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started 0.822128 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=80) [2] r=-1 lpr=80 pi=[49,80)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] exit Start 0.000062 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] exit Reset 0.000049 1 0.000070
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] exit Start 0.000013 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000038 1 0.000051
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000045 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000125 1 0.000322
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000047 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 81 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80166912 unmapped: 1695744 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 675528 data_alloc: 218103808 data_used: 11318
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:11.690397+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:41.104789+0000 osd.1 (osd.1) 38 : cluster [DBG] 11.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:41.115037+0000 osd.1 (osd.1) 39 : cluster [DBG] 11.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 39)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:41.104789+0000 osd.1 (osd.1) 38 : cluster [DBG] 11.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:41.115037+0000 osd.1 (osd.1) 39 : cluster [DBG] 11.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.989848 4 0.000156
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.990161 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=49/50 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.991113 4 0.000091
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.991261 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=49/50 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80191488 unmapped: 1671168 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 82 handle_osd_map epochs [82,82], i have 82, src has [1,82]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.700619 5 0.000391
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000074 1 0.000057
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000689 1 0.000017
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.701437 5 0.000273
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.035448 2 0.000049
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.034448 1 0.000026
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000459 1 0.000049
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.066522 2 0.000052
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 82 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:12.690583+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:42.128867+0000 osd.1 (osd.1) 40 : cluster [DBG] 8.3 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:42.139407+0000 osd.1 (osd.1) 41 : cluster [DBG] 8.3 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 41)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:42.128867+0000 osd.1 (osd.1) 40 : cluster [DBG] 8.3 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:42.139407+0000 osd.1 (osd.1) 41 : cluster [DBG] 8.3 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 82 handle_osd_map epochs [83,83], i have 82, src has [1,83]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.271201 1 0.000125
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary/Active 1.008334 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started/Primary 1.998564 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] exit Started 1.998698 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=39'483 lcod 0'0 mlcod 0'0 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.692206383s) [2] async=[2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 active pruub 142.686187744s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] exit Reset 0.000694 1 0.000672
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] exit Start 0.000055 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.691668510s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY pruub 142.686187744s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.205209 1 0.000127
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active 1.008322 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary 1.999598 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started 1.999626 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=81) [2]/[1] async=[2] r=0 lpr=81 pi=[49,81)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693123817s) [2] async=[2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 active pruub 142.688095093s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] exit Reset 0.000094 1 0.000121
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 83 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83 pruub=15.693052292s) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 142.688095093s@ mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 83 handle_osd_map epochs [83,83], i have 83, src has [1,83]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80224256 unmapped: 1638400 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 83 heartbeat osd_stat(store_statfs(0x4fceb4000/0x0/0x4ffc00000, data 0xe838f/0x174000, compress 0x0/0x0/0x0, omap 0xdca5, meta 0x2bc235b), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:13.690786+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.023649 7 0.000259
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000106 1 0.000083
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/Stray 1.024807 7 0.000195
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000105 1 0.000062
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] lb MIN local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 DELETING pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.046881 2 0.000269
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] lb MIN local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.047069 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.c( v 39'483 (0'0,39'483] lb MIN local-lis/les=81/82 n=7 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=39'483 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.070829 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 DELETING pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.112093 2 0.000251
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete 0.112262 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 84 pg[9.1c( v 68'487 (0'0,68'487] lb MIN local-lis/les=81/82 n=6 ec=49/33 lis/c=81/49 les/c/f=82/50/0 sis=83) [2] r=-1 lpr=83 pi=[49,83)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started 1.137116 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1523712 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:14.690919+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1523712 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:15.691051+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:45.056557+0000 osd.1 (osd.1) 42 : cluster [DBG] 8.1 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:45.067131+0000 osd.1 (osd.1) 43 : cluster [DBG] 8.1 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 43)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:45.056557+0000 osd.1 (osd.1) 42 : cluster [DBG] 8.1 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:45.067131+0000 osd.1 (osd.1) 43 : cluster [DBG] 8.1 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80338944 unmapped: 1523712 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 662153 data_alloc: 218103808 data_used: 10900
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d(unlocked)] enter Initial
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=0 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000118 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=0 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000019 1 0.000040
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000169 1 0.000092
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.000666 2 0.000069
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 85 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:16.691275+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 85 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.167957 2 0.000079
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.168896 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002107 4 0.000369
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000138 1 0.000091
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000048 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067599 2 0.000253
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000049 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=85/86 n=1 ec=45/22 lis/c=85/62 les/c/f=86/63/0 sis=85) [1] r=0 lpr=85 pi=[62,85)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 86 heartbeat osd_stat(store_statfs(0x4fcead000/0x0/0x4ffc00000, data 0xed40b/0x17b000, compress 0x0/0x0/0x0, omap 0xe58c, meta 0x2bc1a74), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80355328 unmapped: 1507328 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:17.691421+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:47.051189+0000 osd.1 (osd.1) 44 : cluster [DBG] 8.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:47.065512+0000 osd.1 (osd.1) 45 : cluster [DBG] 8.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 45)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:47.051189+0000 osd.1 (osd.1) 44 : cluster [DBG] 8.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:47.065512+0000 osd.1 (osd.1) 45 : cluster [DBG] 8.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1499136 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.464494705s of 10.600705147s, submitted: 52
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:18.691627+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80363520 unmapped: 1499136 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:19.691842+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80371712 unmapped: 1490944 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:20.692043+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80379904 unmapped: 1482752 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 681384 data_alloc: 218103808 data_used: 10900
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:21.692180+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 80388096 unmapped: 1474560 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 88 heartbeat osd_stat(store_statfs(0x4fcea7000/0x0/0x4ffc00000, data 0xf0e7f/0x181000, compress 0x0/0x0/0x0, omap 0xea66, meta 0x2bc159a), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 88 handle_osd_map epochs [89,89], i have 88, src has [1,89]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 88 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:22.692319+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:52.141760+0000 osd.1 (osd.1) 46 : cluster [DBG] 3.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:52.152145+0000 osd.1 (osd.1) 47 : cluster [DBG] 3.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 89 heartbeat osd_stat(store_statfs(0x4fcea7000/0x0/0x4ffc00000, data 0xf0e7f/0x181000, compress 0x0/0x0/0x0, omap 0xea66, meta 0x2bc159a), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 417792 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 90 handle_osd_map epochs [90,91], i have 91, src has [1,91]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 47)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:52.141760+0000 osd.1 (osd.1) 46 : cluster [DBG] 3.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:52.152145+0000 osd.1 (osd.1) 47 : cluster [DBG] 3.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:23.692601+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:53.171229+0000 osd.1 (osd.1) 48 : cluster [DBG] 11.c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:53.181737+0000 osd.1 (osd.1) 49 : cluster [DBG] 11.c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 368640 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 49)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:53.171229+0000 osd.1 (osd.1) 48 : cluster [DBG] 11.c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:53.181737+0000 osd.1 (osd.1) 49 : cluster [DBG] 11.c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:24.692795+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 368640 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:25.692929+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 368640 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 694316 data_alloc: 218103808 data_used: 11737
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:26.693049+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 360448 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:27.693186+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 91 handle_osd_map epochs [92,93], i have 91, src has [1,93]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 368640 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:28.693344+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:58.136302+0000 osd.1 (osd.1) 50 : cluster [DBG] 3.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:58.146809+0000 osd.1 (osd.1) 51 : cluster [DBG] 3.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 51)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:58.136302+0000 osd.1 (osd.1) 50 : cluster [DBG] 3.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:58.146809+0000 osd.1 (osd.1) 51 : cluster [DBG] 3.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 93 heartbeat osd_stat(store_statfs(0x4fce9a000/0x0/0x4ffc00000, data 0xf973e/0x190000, compress 0x0/0x0/0x0, omap 0xf4a3, meta 0x2bc0b5d), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 93 handle_osd_map epochs [94,94], i have 94, src has [1,94]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.513738632s of 10.163047791s, submitted: 33
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 344064 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:29.693644+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:59.177175+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:05:59.187730+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 53)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:59.177175+0000 osd.1 (osd.1) 52 : cluster [DBG] 7.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:05:59.187730+0000 osd.1 (osd.1) 53 : cluster [DBG] 7.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 344064 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:30.694001+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 344064 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 708350 data_alloc: 218103808 data_used: 11737
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:31.694139+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 94 handle_osd_map epochs [95,96], i have 94, src has [1,96]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81494016 unmapped: 368640 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:32.694264+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 96 heartbeat osd_stat(store_statfs(0x4fce8d000/0x0/0x4ffc00000, data 0xfe7e0/0x199000, compress 0x0/0x0/0x0, omap 0xfa0a, meta 0x2bc05f6), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 96 handle_osd_map epochs [97,97], i have 96, src has [1,97]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 96 handle_osd_map epochs [97,97], i have 97, src has [1,97]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81502208 unmapped: 360448 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15(unlocked)] enter Initial
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=0 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000092 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=0 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000024
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000152 1 0.000038
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000028 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000191 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 98 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.122774 2 0.000048
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.123095 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.123133 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=0 lpr=98 pi=[56,98)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000455 1 0.000621
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000061 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 99 pg[9.15( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 99 handle_osd_map epochs [99,99], i have 99, src has [1,99]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:33.694414+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:03.119529+0000 osd.1 (osd.1) 54 : cluster [DBG] 3.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:03.130195+0000 osd.1 (osd.1) 55 : cluster [DBG] 3.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 55)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:03.119529+0000 osd.1 (osd.1) 54 : cluster [DBG] 3.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:03.130195+0000 osd.1 (osd.1) 55 : cluster [DBG] 3.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 327680 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.009079 6 0.000238
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003586 3 0.000083
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000219 1 0.000043
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.028572 1 0.000109
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:34.694658+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:04.084386+0000 osd.1 (osd.1) 56 : cluster [DBG] 11.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:04.094968+0000 osd.1 (osd.1) 57 : cluster [DBG] 11.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81551360 unmapped: 311296 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 57)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:04.084386+0000 osd.1 (osd.1) 56 : cluster [DBG] 11.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:04.094968+0000 osd.1 (osd.1) 57 : cluster [DBG] 11.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 100 handle_osd_map epochs [101,101], i have 100, src has [1,101]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.985326 1 0.000069
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 1.017831 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.027097 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] r=-1 lpr=99 pi=[56,99)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000106 1 0.000157
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000047 1 0.000047
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002738 3 0.000052
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:35.694879+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:05.061229+0000 osd.1 (osd.1) 58 : cluster [DBG] 3.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:05.071766+0000 osd.1 (osd.1) 59 : cluster [DBG] 3.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 101 heartbeat osd_stat(store_statfs(0x4fce7e000/0x0/0x4ffc00000, data 0x105477/0x1a6000, compress 0x0/0x0/0x0, omap 0x1040e, meta 0x2bbfbf2), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81592320 unmapped: 270336 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 741966 data_alloc: 218103808 data_used: 12351
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 59)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:05.061229+0000 osd.1 (osd.1) 58 : cluster [DBG] 3.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:05.071766+0000 osd.1 (osd.1) 59 : cluster [DBG] 3.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 101 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004844 2 0.000098
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007704 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/56 les/c/f=102/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003662 4 0.000096
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/56 les/c/f=102/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/56 les/c/f=102/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/56 les/c/f=102/57/0 sis=101) [1] r=0 lpr=101 pi=[56,101)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:36.695258+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81600512 unmapped: 262144 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:37.695420+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81608704 unmapped: 253952 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:38.695596+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 103 handle_osd_map epochs [104,104], i have 103, src has [1,104]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.663535118s of 10.045524597s, submitted: 86
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fce72000/0x0/0x4ffc00000, data 0x10be30/0x1b2000, compress 0x0/0x0/0x0, omap 0x10e32, meta 0x2bbf1ce), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 188416 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:39.695908+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 180224 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:40.696065+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 180224 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 751680 data_alloc: 218103808 data_used: 12351
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 104 heartbeat osd_stat(store_statfs(0x4fce72000/0x0/0x4ffc00000, data 0x10be30/0x1b2000, compress 0x0/0x0/0x0, omap 0x10e32, meta 0x2bbf1ce), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:41.696183+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 131072 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:42.696255+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 131072 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:43.696445+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:13.124221+0000 osd.1 (osd.1) 60 : cluster [DBG] 7.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:13.134729+0000 osd.1 (osd.1) 61 : cluster [DBG] 7.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 105 handle_osd_map epochs [105,106], i have 106, src has [1,106]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 122880 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:44.696611+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 61)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:13.124221+0000 osd.1 (osd.1) 60 : cluster [DBG] 7.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:13.134729+0000 osd.1 (osd.1) 61 : cluster [DBG] 7.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81739776 unmapped: 122880 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:45.696769+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:15.115572+0000 osd.1 (osd.1) 62 : cluster [DBG] 8.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:15.126227+0000 osd.1 (osd.1) 63 : cluster [DBG] 8.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 63)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:15.115572+0000 osd.1 (osd.1) 62 : cluster [DBG] 8.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:15.126227+0000 osd.1 (osd.1) 63 : cluster [DBG] 8.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 106 handle_osd_map epochs [106,107], i have 106, src has [1,107]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 114688 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 765845 data_alloc: 218103808 data_used: 12351
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:46.696936+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:16.090311+0000 osd.1 (osd.1) 64 : cluster [DBG] 11.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:16.100854+0000 osd.1 (osd.1) 65 : cluster [DBG] 11.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 65)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:16.090311+0000 osd.1 (osd.1) 64 : cluster [DBG] 11.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:16.100854+0000 osd.1 (osd.1) 65 : cluster [DBG] 11.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 107 heartbeat osd_stat(store_statfs(0x4fce6f000/0x0/0x4ffc00000, data 0x111104/0x1bb000, compress 0x0/0x0/0x0, omap 0x115e2, meta 0x2bbea1e), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 107 handle_osd_map epochs [108,108], i have 107, src has [1,108]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 107 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81756160 unmapped: 106496 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:47.697128+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 108 heartbeat osd_stat(store_statfs(0x4fce6a000/0x0/0x4ffc00000, data 0x112b85/0x1be000, compress 0x0/0x0/0x0, omap 0x11876, meta 0x2bbe78a), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 108 handle_osd_map epochs [109,109], i have 108, src has [1,109]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 65536 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 109 heartbeat osd_stat(store_statfs(0x4fce69000/0x0/0x4ffc00000, data 0x114742/0x1c1000, compress 0x0/0x0/0x0, omap 0x11b0c, meta 0x2bbe4f4), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:48.697253+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:18.157835+0000 osd.1 (osd.1) 66 : cluster [DBG] 3.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:18.168420+0000 osd.1 (osd.1) 67 : cluster [DBG] 3.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 67)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:18.157835+0000 osd.1 (osd.1) 66 : cluster [DBG] 3.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:18.168420+0000 osd.1 (osd.1) 67 : cluster [DBG] 3.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 49152 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:49.697451+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.029823303s of 11.274053574s, submitted: 21
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 8192 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 109 handle_osd_map epochs [110,111], i have 109, src has [1,111]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:50.697601+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:20.148979+0000 osd.1 (osd.1) 68 : cluster [DBG] 8.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:20.159574+0000 osd.1 (osd.1) 69 : cluster [DBG] 8.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 69)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:20.148979+0000 osd.1 (osd.1) 68 : cluster [DBG] 8.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:20.159574+0000 osd.1 (osd.1) 69 : cluster [DBG] 8.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 32768 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 782787 data_alloc: 218103808 data_used: 12628
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:51.697823+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 111 heartbeat osd_stat(store_statfs(0x4fce61000/0x0/0x4ffc00000, data 0x117d4a/0x1c7000, compress 0x0/0x0/0x0, omap 0x11da4, meta 0x2bbe25c), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 24576 heap: 81862656 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 111 handle_osd_map epochs [111,112], i have 111, src has [1,112]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:52.697988+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:22.123994+0000 osd.1 (osd.1) 70 : cluster [DBG] 7.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:22.134537+0000 osd.1 (osd.1) 71 : cluster [DBG] 7.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 1048576 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 71)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:22.123994+0000 osd.1 (osd.1) 70 : cluster [DBG] 7.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:22.134537+0000 osd.1 (osd.1) 71 : cluster [DBG] 7.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:53.698215+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 1040384 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 113 handle_osd_map epochs [113,114], i have 113, src has [1,114]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:54.698420+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:24.188847+0000 osd.1 (osd.1) 72 : cluster [DBG] 11.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:24.199178+0000 osd.1 (osd.1) 73 : cluster [DBG] 11.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 114 heartbeat osd_stat(store_statfs(0x4fce59000/0x0/0x4ffc00000, data 0x11b367/0x1cd000, compress 0x0/0x0/0x0, omap 0x12257, meta 0x2bbdda9), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 1032192 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 73)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:24.188847+0000 osd.1 (osd.1) 72 : cluster [DBG] 11.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:24.199178+0000 osd.1 (osd.1) 73 : cluster [DBG] 11.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:55.698613+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 1032192 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 795501 data_alloc: 218103808 data_used: 12628
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:56.698737+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 1032192 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 114 handle_osd_map epochs [115,117], i have 114, src has [1,117]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:57.698931+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:27.132785+0000 osd.1 (osd.1) 74 : cluster [DBG] 7.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:27.143263+0000 osd.1 (osd.1) 75 : cluster [DBG] 7.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 950272 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 75)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:27.132785+0000 osd.1 (osd.1) 74 : cluster [DBG] 7.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:27.143263+0000 osd.1 (osd.1) 75 : cluster [DBG] 7.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f(unlocked)] enter Initial
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=0 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000147 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=0 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000037
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000213 1 0.000124
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000061 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000300 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 118 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:58.699175+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 933888 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 118 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.019044 2 0.000099
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.019444 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.019480 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=118) [1] r=0 lpr=118 pi=[69,118)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000085 1 0.000130
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 119 pg[9.1f( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:59.699492+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.645082474s of 10.000852585s, submitted: 20
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 966656 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:00.699676+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:30.147866+0000 osd.1 (osd.1) 76 : cluster [DBG] 3.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:30.158422+0000 osd.1 (osd.1) 77 : cluster [DBG] 3.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 119 heartbeat osd_stat(store_statfs(0x4fce4b000/0x0/0x4ffc00000, data 0x125608/0x1df000, compress 0x0/0x0/0x0, omap 0x12cd7, meta 0x2bbd329), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 119 handle_osd_map epochs [120,120], i have 119, src has [1,120]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 119 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.273404 5 0.000047
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=69/69 les/c/f=70/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002853 4 0.000121
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000072 1 0.000037
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.036220 1 0.000023
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 120 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 892928 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 829231 data_alloc: 218103808 data_used: 13771
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.737561 1 0.000059
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.776815 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.050251 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=119) [1]/[2] r=-1 lpr=119 pi=[69,119)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000118 1 0.000160
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001417 2 0.000047
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 121 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: merge_log_dups log.dups.size()=0 olog.dups.size()=11
Jan 20 19:27:23 compute-0 ceph-osd[87071]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000631 2 0.000125
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000020 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 121 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 77)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:30.147866+0000 osd.1 (osd.1) 76 : cluster [DBG] 3.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:30.158422+0000 osd.1 (osd.1) 77 : cluster [DBG] 3.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:01.699841+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 79 sent 77 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:31.138536+0000 osd.1 (osd.1) 78 : cluster [DBG] 7.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:31.149175+0000 osd.1 (osd.1) 79 : cluster [DBG] 7.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 868352 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 121 handle_osd_map epochs [121,122], i have 121, src has [1,122]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 121 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004884 2 0.000144
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.007074 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 79)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:31.138536+0000 osd.1 (osd.1) 78 : cluster [DBG] 7.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:31.149175+0000 osd.1 (osd.1) 79 : cluster [DBG] 7.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=119/69 les/c/f=120/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=121/69 les/c/f=122/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002333 4 0.000269
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=121/69 les/c/f=122/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=121/69 les/c/f=122/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 pg_epoch: 122 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=121/122 n=6 ec=49/33 lis/c=121/69 les/c/f=122/70/0 sis=121) [1] r=0 lpr=121 pi=[69,121)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:02.700091+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:32.168023+0000 osd.1 (osd.1) 80 : cluster [DBG] 8.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:32.182184+0000 osd.1 (osd.1) 81 : cluster [DBG] 8.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _renew_subs
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 794624 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 81)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:32.168023+0000 osd.1 (osd.1) 80 : cluster [DBG] 8.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:32.182184+0000 osd.1 (osd.1) 81 : cluster [DBG] 8.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:03.700281+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 786432 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:04.700426+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:34.212073+0000 osd.1 (osd.1) 82 : cluster [DBG] 3.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:34.222596+0000 osd.1 (osd.1) 83 : cluster [DBG] 3.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 786432 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce41000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 83)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:34.212073+0000 osd.1 (osd.1) 82 : cluster [DBG] 3.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:34.222596+0000 osd.1 (osd.1) 83 : cluster [DBG] 3.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:05.700600+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 917504 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840818 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:06.700738+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:36.193625+0000 osd.1 (osd.1) 84 : cluster [DBG] 7.17 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:36.204077+0000 osd.1 (osd.1) 85 : cluster [DBG] 7.17 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 909312 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 85)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:36.193625+0000 osd.1 (osd.1) 84 : cluster [DBG] 7.17 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:36.204077+0000 osd.1 (osd.1) 85 : cluster [DBG] 7.17 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:07.700972+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82001920 unmapped: 909312 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:08.701139+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 901120 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:09.701269+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 901120 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:10.701413+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82010112 unmapped: 901120 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 840818 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:11.701548+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 892928 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:12.701684+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 892928 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:13.701973+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.837522507s of 13.887688637s, submitted: 30
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 884736 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:14.702092+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:44.037568+0000 osd.1 (osd.1) 86 : cluster [DBG] 7.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:44.048196+0000 osd.1 (osd.1) 87 : cluster [DBG] 7.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 884736 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:15.702548+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 4 last_log 89 sent 87 num 4 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:45.067042+0000 osd.1 (osd.1) 88 : cluster [DBG] 3.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:45.077590+0000 osd.1 (osd.1) 89 : cluster [DBG] 3.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 87)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:44.037568+0000 osd.1 (osd.1) 86 : cluster [DBG] 7.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:44.048196+0000 osd.1 (osd.1) 87 : cluster [DBG] 7.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 89)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:45.067042+0000 osd.1 (osd.1) 88 : cluster [DBG] 3.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:45.077590+0000 osd.1 (osd.1) 89 : cluster [DBG] 3.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 876544 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 845644 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:16.702834+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 868352 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:17.702963+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:47.130541+0000 osd.1 (osd.1) 90 : cluster [DBG] 11.1d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:47.140893+0000 osd.1 (osd.1) 91 : cluster [DBG] 11.1d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 91)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:47.130541+0000 osd.1 (osd.1) 90 : cluster [DBG] 11.1d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:47.140893+0000 osd.1 (osd.1) 91 : cluster [DBG] 11.1d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 860160 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:18.703129+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 851968 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:19.703613+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:49.118732+0000 osd.1 (osd.1) 92 : cluster [DBG] 8.1e scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:49.129304+0000 osd.1 (osd.1) 93 : cluster [DBG] 8.1e scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 93)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:49.118732+0000 osd.1 (osd.1) 92 : cluster [DBG] 8.1e scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:49.129304+0000 osd.1 (osd.1) 93 : cluster [DBG] 8.1e scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82059264 unmapped: 851968 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:20.703840+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:50.126787+0000 osd.1 (osd.1) 94 : cluster [DBG] 7.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:50.137249+0000 osd.1 (osd.1) 95 : cluster [DBG] 7.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 95)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:50.126787+0000 osd.1 (osd.1) 94 : cluster [DBG] 7.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:50.137249+0000 osd.1 (osd.1) 95 : cluster [DBG] 7.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 827392 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 852885 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:21.704031+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 827392 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:22.704157+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 819200 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:23.704290+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:53.128978+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.11 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:53.139538+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.11 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 97)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:53.128978+0000 osd.1 (osd.1) 96 : cluster [DBG] 5.11 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:53.139538+0000 osd.1 (osd.1) 97 : cluster [DBG] 5.11 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.002383232s of 10.074170113s, submitted: 12
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.17 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 819200 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:24.704467+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:54.111801+0000 osd.1 (osd.1) 98 : cluster [DBG] 2.17 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:54.122406+0000 osd.1 (osd.1) 99 : cluster [DBG] 2.17 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 99)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:54.111801+0000 osd.1 (osd.1) 98 : cluster [DBG] 2.17 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:54.122406+0000 osd.1 (osd.1) 99 : cluster [DBG] 2.17 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 811008 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:25.704705+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 802816 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 857711 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:26.704881+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 802816 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:27.705076+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 794624 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:28.705221+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 794624 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:29.705448+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:59.040991+0000 osd.1 (osd.1) 100 : cluster [DBG] 10.1a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:06:59.051513+0000 osd.1 (osd.1) 101 : cluster [DBG] 10.1a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 101)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:59.040991+0000 osd.1 (osd.1) 100 : cluster [DBG] 10.1a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:06:59.051513+0000 osd.1 (osd.1) 101 : cluster [DBG] 10.1a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 794624 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:30.705678+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 786432 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 860126 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:31.705820+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 786432 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:32.705953+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 778240 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:33.706077+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 778240 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:34.706220+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.908032417s of 10.916373253s, submitted: 4
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 770048 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:35.706350+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:05.028175+0000 osd.1 (osd.1) 102 : cluster [DBG] 2.15 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:05.038746+0000 osd.1 (osd.1) 103 : cluster [DBG] 2.15 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 761856 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 864952 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 103)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:05.028175+0000 osd.1 (osd.1) 102 : cluster [DBG] 2.15 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:05.038746+0000 osd.1 (osd.1) 103 : cluster [DBG] 2.15 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:36.706546+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:06.020869+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:06.031417+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 753664 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:37.706780+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 105)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:06.020869+0000 osd.1 (osd.1) 104 : cluster [DBG] 5.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:06.031417+0000 osd.1 (osd.1) 105 : cluster [DBG] 5.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 745472 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:38.706925+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:07.969802+0000 osd.1 (osd.1) 106 : cluster [DBG] 10.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:07.980416+0000 osd.1 (osd.1) 107 : cluster [DBG] 10.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 107)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:07.969802+0000 osd.1 (osd.1) 106 : cluster [DBG] 10.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:07.980416+0000 osd.1 (osd.1) 107 : cluster [DBG] 10.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 745472 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:39.707101+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 737280 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:40.707320+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 737280 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 869780 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:41.707546+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:10.935873+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:10.946413+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 109)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:10.935873+0000 osd.1 (osd.1) 108 : cluster [DBG] 5.16 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:10.946413+0000 osd.1 (osd.1) 109 : cluster [DBG] 5.16 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 720896 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:42.708042+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:11.944446+0000 osd.1 (osd.1) 110 : cluster [DBG] 2.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:11.955069+0000 osd.1 (osd.1) 111 : cluster [DBG] 2.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 111)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:11.944446+0000 osd.1 (osd.1) 110 : cluster [DBG] 2.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:11.955069+0000 osd.1 (osd.1) 111 : cluster [DBG] 2.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 720896 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:43.708311+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 720896 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:44.708442+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.6 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 712704 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:45.708585+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:14.888574+0000 osd.1 (osd.1) 112 : cluster [DBG] 10.6 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:14.898966+0000 osd.1 (osd.1) 113 : cluster [DBG] 10.6 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 113)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:14.888574+0000 osd.1 (osd.1) 112 : cluster [DBG] 10.6 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:14.898966+0000 osd.1 (osd.1) 113 : cluster [DBG] 10.6 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 712704 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874604 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:46.708773+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 704512 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:47.708927+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.685772896s of 12.770541191s, submitted: 12
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 704512 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:48.709062+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:17.798725+0000 osd.1 (osd.1) 114 : cluster [DBG] 5.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:17.809241+0000 osd.1 (osd.1) 115 : cluster [DBG] 5.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 696320 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 115)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:17.798725+0000 osd.1 (osd.1) 114 : cluster [DBG] 5.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:17.809241+0000 osd.1 (osd.1) 115 : cluster [DBG] 5.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:49.709332+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 688128 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:50.709632+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 688128 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 877017 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:51.709751+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 679936 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:52.709881+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 679936 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:53.710016+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 679936 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:54.710166+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 671744 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:55.710336+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:24.804755+0000 osd.1 (osd.1) 116 : cluster [DBG] 2.3 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:24.815398+0000 osd.1 (osd.1) 117 : cluster [DBG] 2.3 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 117)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:24.804755+0000 osd.1 (osd.1) 116 : cluster [DBG] 2.3 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:24.815398+0000 osd.1 (osd.1) 117 : cluster [DBG] 2.3 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 671744 heap: 82911232 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 881839 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:56.710660+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:25.763346+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:25.773778+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 119)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:25.763346+0000 osd.1 (osd.1) 118 : cluster [DBG] 2.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:25.773778+0000 osd.1 (osd.1) 119 : cluster [DBG] 2.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83296256 unmapped: 663552 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:57.710890+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 3 last_log 122 sent 119 num 3 unsent 3 sending 3
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:26.752296+0000 osd.1 (osd.1) 120 : cluster [DBG] 2.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:26.762869+0000 osd.1 (osd.1) 121 : cluster [DBG] 2.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:27.708705+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 122)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:26.752296+0000 osd.1 (osd.1) 120 : cluster [DBG] 2.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:26.762869+0000 osd.1 (osd.1) 121 : cluster [DBG] 2.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:27.708705+0000 osd.1 (osd.1) 122 : cluster [DBG] 5.c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83312640 unmapped: 647168 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:58.711116+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 1 last_log 123 sent 122 num 1 unsent 1 sending 1
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:27.719307+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 123)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:27.719307+0000 osd.1 (osd.1) 123 : cluster [DBG] 5.c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 638976 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:59.711322+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83320832 unmapped: 638976 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.860413551s of 12.898897171s, submitted: 10
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:00.711540+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:30.697636+0000 osd.1 (osd.1) 124 : cluster [DBG] 10.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:30.707625+0000 osd.1 (osd.1) 125 : cluster [DBG] 10.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 125)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:30.697636+0000 osd.1 (osd.1) 124 : cluster [DBG] 10.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:30.707625+0000 osd.1 (osd.1) 125 : cluster [DBG] 10.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 630784 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 889074 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:01.711865+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83329024 unmapped: 630784 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:02.712054+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83337216 unmapped: 622592 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:03.712166+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:33.682104+0000 osd.1 (osd.1) 126 : cluster [DBG] 2.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:33.692652+0000 osd.1 (osd.1) 127 : cluster [DBG] 2.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 127)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:33.682104+0000 osd.1 (osd.1) 126 : cluster [DBG] 2.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:33.692652+0000 osd.1 (osd.1) 127 : cluster [DBG] 2.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83345408 unmapped: 614400 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:04.712324+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:34.687657+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:34.698244+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 129)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:34.687657+0000 osd.1 (osd.1) 128 : cluster [DBG] 2.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:34.698244+0000 osd.1 (osd.1) 129 : cluster [DBG] 2.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83353600 unmapped: 606208 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:05.712508+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:35.643717+0000 osd.1 (osd.1) 130 : cluster [DBG] 10.11 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:35.654000+0000 osd.1 (osd.1) 131 : cluster [DBG] 10.11 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 131)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:35.643717+0000 osd.1 (osd.1) 130 : cluster [DBG] 10.11 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:35.654000+0000 osd.1 (osd.1) 131 : cluster [DBG] 10.11 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 598016 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 898724 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:06.712720+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:36.644403+0000 osd.1 (osd.1) 132 : cluster [DBG] 10.f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:36.654408+0000 osd.1 (osd.1) 133 : cluster [DBG] 10.f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 133)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:36.644403+0000 osd.1 (osd.1) 132 : cluster [DBG] 10.f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:36.654408+0000 osd.1 (osd.1) 133 : cluster [DBG] 10.f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 598016 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:07.712899+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:37.629456+0000 osd.1 (osd.1) 134 : cluster [DBG] 5.f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:37.640170+0000 osd.1 (osd.1) 135 : cluster [DBG] 5.f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 135)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:37.629456+0000 osd.1 (osd.1) 134 : cluster [DBG] 5.f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:37.640170+0000 osd.1 (osd.1) 135 : cluster [DBG] 5.f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83361792 unmapped: 598016 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:08.713093+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:38.656596+0000 osd.1 (osd.1) 136 : cluster [DBG] 10.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:38.666665+0000 osd.1 (osd.1) 137 : cluster [DBG] 10.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 137)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:38.656596+0000 osd.1 (osd.1) 136 : cluster [DBG] 10.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:38.666665+0000 osd.1 (osd.1) 137 : cluster [DBG] 10.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 589824 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:09.713299+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:39.661769+0000 osd.1 (osd.1) 138 : cluster [DBG] 2.6 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:39.672196+0000 osd.1 (osd.1) 139 : cluster [DBG] 2.6 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 139)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:39.661769+0000 osd.1 (osd.1) 138 : cluster [DBG] 2.6 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:39.672196+0000 osd.1 (osd.1) 139 : cluster [DBG] 2.6 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83369984 unmapped: 589824 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:10.713542+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83378176 unmapped: 581632 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905961 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:11.713692+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83386368 unmapped: 573440 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:12.713819+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.782507896s of 12.041462898s, submitted: 16
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83394560 unmapped: 565248 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:13.713929+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:42.739130+0000 osd.1 (osd.1) 140 : cluster [DBG] 10.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:42.749684+0000 osd.1 (osd.1) 141 : cluster [DBG] 10.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 141)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:42.739130+0000 osd.1 (osd.1) 140 : cluster [DBG] 10.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:42.749684+0000 osd.1 (osd.1) 141 : cluster [DBG] 10.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 557056 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:14.714091+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83402752 unmapped: 557056 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:15.714189+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:44.787728+0000 osd.1 (osd.1) 142 : cluster [DBG] 5.1a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:44.798325+0000 osd.1 (osd.1) 143 : cluster [DBG] 5.1a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 143)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:44.787728+0000 osd.1 (osd.1) 142 : cluster [DBG] 5.1a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:44.798325+0000 osd.1 (osd.1) 143 : cluster [DBG] 5.1a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 548864 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 910787 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:16.714412+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83410944 unmapped: 548864 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:17.714552+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:18.714758+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83419136 unmapped: 540672 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:19.714891+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:48.734773+0000 osd.1 (osd.1) 144 : cluster [DBG] 5.1 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:48.745372+0000 osd.1 (osd.1) 145 : cluster [DBG] 5.1 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 524288 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:20.715308+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 145)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:48.734773+0000 osd.1 (osd.1) 144 : cluster [DBG] 5.1 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:48.745372+0000 osd.1 (osd.1) 145 : cluster [DBG] 5.1 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83435520 unmapped: 524288 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:21.715446+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:50.743840+0000 osd.1 (osd.1) 146 : cluster [DBG] 5.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:50.754425+0000 osd.1 (osd.1) 147 : cluster [DBG] 5.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 516096 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 915611 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 147)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:50.743840+0000 osd.1 (osd.1) 146 : cluster [DBG] 5.19 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:50.754425+0000 osd.1 (osd.1) 147 : cluster [DBG] 5.19 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:22.715653+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83443712 unmapped: 516096 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.002562523s of 10.019463539s, submitted: 8
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:23.715844+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:52.758609+0000 osd.1 (osd.1) 148 : cluster [DBG] 5.9 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:52.769178+0000 osd.1 (osd.1) 149 : cluster [DBG] 5.9 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 499712 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 149)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:52.758609+0000 osd.1 (osd.1) 148 : cluster [DBG] 5.9 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:52.769178+0000 osd.1 (osd.1) 149 : cluster [DBG] 5.9 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:24.716059+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83460096 unmapped: 499712 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:25.716167+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83468288 unmapped: 491520 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 2.1b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:26.716299+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 151 sent 149 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:55.744276+0000 osd.1 (osd.1) 150 : cluster [DBG] 2.1b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:07:55.754317+0000 osd.1 (osd.1) 151 : cluster [DBG] 2.1b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 483328 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920435 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 151)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:55.744276+0000 osd.1 (osd.1) 150 : cluster [DBG] 2.1b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:07:55.754317+0000 osd.1 (osd.1) 151 : cluster [DBG] 2.1b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:27.716572+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83476480 unmapped: 483328 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:28.716737+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 475136 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:29.716889+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 475136 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:30.717049+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83484672 unmapped: 475136 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:31.717204+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 466944 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 920435 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:32.717409+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83492864 unmapped: 466944 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.074139595s of 10.080884933s, submitted: 4
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:33.717556+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:02.839576+0000 osd.1 (osd.1) 152 : cluster [DBG] 5.1d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:02.850126+0000 osd.1 (osd.1) 153 : cluster [DBG] 5.1d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 458752 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 153)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:02.839576+0000 osd.1 (osd.1) 152 : cluster [DBG] 5.1d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:02.850126+0000 osd.1 (osd.1) 153 : cluster [DBG] 5.1d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:34.717817+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 458752 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:35.718073+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:04.896104+0000 osd.1 (osd.1) 154 : cluster [DBG] 10.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:04.906631+0000 osd.1 (osd.1) 155 : cluster [DBG] 10.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83501056 unmapped: 458752 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 155)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:04.896104+0000 osd.1 (osd.1) 154 : cluster [DBG] 10.13 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:04.906631+0000 osd.1 (osd.1) 155 : cluster [DBG] 10.13 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:36.718310+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 450560 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 925263 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:37.719201+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83509248 unmapped: 450560 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:38.719479+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83517440 unmapped: 442368 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:39.719627+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:08.896094+0000 osd.1 (osd.1) 156 : cluster [DBG] 4.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:08.906821+0000 osd.1 (osd.1) 157 : cluster [DBG] 4.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 157)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:08.896094+0000 osd.1 (osd.1) 156 : cluster [DBG] 4.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:08.906821+0000 osd.1 (osd.1) 157 : cluster [DBG] 4.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83525632 unmapped: 434176 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:40.719889+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 425984 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:41.720136+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 927674 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83533824 unmapped: 425984 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:42.720547+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.993028641s of 10.005455971s, submitted: 6
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83550208 unmapped: 409600 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:43.721051+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:12.844974+0000 osd.1 (osd.1) 158 : cluster [DBG] 4.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:12.855520+0000 osd.1 (osd.1) 159 : cluster [DBG] 4.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 159)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:12.844974+0000 osd.1 (osd.1) 158 : cluster [DBG] 4.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:12.855520+0000 osd.1 (osd.1) 159 : cluster [DBG] 4.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83558400 unmapped: 401408 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:44.721588+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 393216 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:45.721698+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83566592 unmapped: 393216 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:46.722414+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 930085 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 385024 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:47.722553+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83574784 unmapped: 385024 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:48.722673+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 376832 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83582976 unmapped: 376832 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:50.271725+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83599360 unmapped: 360448 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:51.271913+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:20.272532+0000 osd.1 (osd.1) 160 : cluster [DBG] 4.f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:20.283012+0000 osd.1 (osd.1) 161 : cluster [DBG] 4.f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 161)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:20.272532+0000 osd.1 (osd.1) 160 : cluster [DBG] 4.f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:20.283012+0000 osd.1 (osd.1) 161 : cluster [DBG] 4.f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 934907 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 352256 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:52.272124+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:21.301918+0000 osd.1 (osd.1) 162 : cluster [DBG] 4.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:21.312429+0000 osd.1 (osd.1) 163 : cluster [DBG] 4.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 163)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:21.301918+0000 osd.1 (osd.1) 162 : cluster [DBG] 4.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:21.312429+0000 osd.1 (osd.1) 163 : cluster [DBG] 4.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83607552 unmapped: 352256 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:53.272391+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 344064 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:54.272538+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.441165924s of 11.458217621s, submitted: 6
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83615744 unmapped: 344064 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:55.272681+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:24.303141+0000 osd.1 (osd.1) 164 : cluster [DBG] 4.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:24.313547+0000 osd.1 (osd.1) 165 : cluster [DBG] 4.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 165)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:24.303141+0000 osd.1 (osd.1) 164 : cluster [DBG] 4.7 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:24.313547+0000 osd.1 (osd.1) 165 : cluster [DBG] 4.7 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 335872 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:56.272875+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 937318 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 335872 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:57.273053+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83623936 unmapped: 335872 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:58.273210+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 327680 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:59.273435+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:28.295878+0000 osd.1 (osd.1) 166 : cluster [DBG] 4.9 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:28.306427+0000 osd.1 (osd.1) 167 : cluster [DBG] 4.9 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 167)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:28.295878+0000 osd.1 (osd.1) 166 : cluster [DBG] 4.9 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:28.306427+0000 osd.1 (osd.1) 167 : cluster [DBG] 4.9 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83632128 unmapped: 327680 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:00.273598+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 1 last_log 168 sent 167 num 1 unsent 1 sending 1
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:30.271040+0000 osd.1 (osd.1) 168 : cluster [DBG] 4.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 168)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:30.271040+0000 osd.1 (osd.1) 168 : cluster [DBG] 4.5 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 311296 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:01.273776+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 1 last_log 169 sent 168 num 1 unsent 1 sending 1
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:30.281562+0000 osd.1 (osd.1) 169 : cluster [DBG] 4.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 169)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:30.281562+0000 osd.1 (osd.1) 169 : cluster [DBG] 4.5 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944551 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 311296 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:02.273983+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:31.273993+0000 osd.1 (osd.1) 170 : cluster [DBG] 4.8 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:31.284545+0000 osd.1 (osd.1) 171 : cluster [DBG] 4.8 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 171)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:31.273993+0000 osd.1 (osd.1) 170 : cluster [DBG] 4.8 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:31.284545+0000 osd.1 (osd.1) 171 : cluster [DBG] 4.8 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83648512 unmapped: 311296 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:03.274299+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83656704 unmapped: 303104 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:04.274468+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83673088 unmapped: 286720 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:05.274608+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 278528 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:06.274781+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944551 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 278528 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:07.274954+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83681280 unmapped: 278528 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:08.275160+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 270336 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:09.275603+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83689472 unmapped: 270336 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:10.275777+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 262144 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:11.275980+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944551 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 262144 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:12.276237+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83697664 unmapped: 262144 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:13.276937+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 253952 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:14.277171+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83705856 unmapped: 253952 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:15.277706+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 237568 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:16.278164+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 944551 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83722240 unmapped: 237568 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 22.850078583s of 22.913881302s, submitted: 8
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:17.278691+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:47.217065+0000 osd.1 (osd.1) 172 : cluster [DBG] 5.18 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:47.227634+0000 osd.1 (osd.1) 173 : cluster [DBG] 5.18 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 173)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:47.217065+0000 osd.1 (osd.1) 172 : cluster [DBG] 5.18 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:47.227634+0000 osd.1 (osd.1) 173 : cluster [DBG] 5.18 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83730432 unmapped: 229376 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:18.279037+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83738624 unmapped: 221184 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:19.279268+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83746816 unmapped: 212992 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:20.279434+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 204800 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:21.279827+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946964 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83755008 unmapped: 204800 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:22.280015+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 196608 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:23.280094+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83763200 unmapped: 196608 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:24.280208+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83771392 unmapped: 188416 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:25.280452+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:55.256641+0000 osd.1 (osd.1) 174 : cluster [DBG] 4.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:55.266285+0000 osd.1 (osd.1) 175 : cluster [DBG] 4.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 175)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:55.256641+0000 osd.1 (osd.1) 174 : cluster [DBG] 4.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:55.266285+0000 osd.1 (osd.1) 175 : cluster [DBG] 4.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83779584 unmapped: 180224 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:26.280668+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 949377 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 172032 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:27.280835+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83787776 unmapped: 172032 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.956479073s of 10.965095520s, submitted: 4
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:28.281028+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:58.182279+0000 osd.1 (osd.1) 176 : cluster [DBG] 4.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:08:58.192908+0000 osd.1 (osd.1) 177 : cluster [DBG] 4.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 177)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:58.182279+0000 osd.1 (osd.1) 176 : cluster [DBG] 4.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:08:58.192908+0000 osd.1 (osd.1) 177 : cluster [DBG] 4.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 155648 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:29.281212+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83804160 unmapped: 155648 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:30.281337+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 147456 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:31.281501+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 951790 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 147456 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:32.281648+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 139264 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:33.281773+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83820544 unmapped: 139264 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:34.281915+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:04.198677+0000 osd.1 (osd.1) 178 : cluster [DBG] 4.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:04.209253+0000 osd.1 (osd.1) 179 : cluster [DBG] 4.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 179)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:04.198677+0000 osd.1 (osd.1) 178 : cluster [DBG] 4.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:04.209253+0000 osd.1 (osd.1) 179 : cluster [DBG] 4.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 131072 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:35.282101+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 131072 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:36.282249+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954203 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 131072 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:37.282457+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83836928 unmapped: 122880 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:38.282610+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83845120 unmapped: 114688 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:39.282821+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 106496 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:40.282958+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83853312 unmapped: 106496 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:41.283202+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 954203 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83861504 unmapped: 98304 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:42.283324+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83869696 unmapped: 90112 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:43.283562+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.974168777s of 15.981106758s, submitted: 4
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83877888 unmapped: 81920 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:44.283771+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:14.163410+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:14.177440+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 181)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:14.163410+0000 osd.1 (osd.1) 180 : cluster [DBG] 10.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:14.177440+0000 osd.1 (osd.1) 181 : cluster [DBG] 10.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 73728 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:45.284023+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83886080 unmapped: 73728 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:46.284199+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 956618 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 65536 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:47.284390+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 10.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 65536 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:48.284537+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:18.169266+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:18.183412+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 183)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:18.169266+0000 osd.1 (osd.1) 182 : cluster [DBG] 10.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:18.183412+0000 osd.1 (osd.1) 183 : cluster [DBG] 10.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 57344 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:49.284819+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:19.135009+0000 osd.1 (osd.1) 184 : cluster [DBG] 6.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:19.145560+0000 osd.1 (osd.1) 185 : cluster [DBG] 6.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 185)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:19.135009+0000 osd.1 (osd.1) 184 : cluster [DBG] 6.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:19.145560+0000 osd.1 (osd.1) 185 : cluster [DBG] 6.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 57344 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:50.285118+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 57344 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:51.285276+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 187 sent 185 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:21.191160+0000 osd.1 (osd.1) 186 : cluster [DBG] 6.6 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:21.205292+0000 osd.1 (osd.1) 187 : cluster [DBG] 6.6 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963855 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 49152 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:52.285431+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 4 last_log 189 sent 187 num 4 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:22.144522+0000 osd.1 (osd.1) 188 : cluster [DBG] 6.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:22.169230+0000 osd.1 (osd.1) 189 : cluster [DBG] 6.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 187)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:21.191160+0000 osd.1 (osd.1) 186 : cluster [DBG] 6.6 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:21.205292+0000 osd.1 (osd.1) 187 : cluster [DBG] 6.6 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 49152 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:53.285594+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 189)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:22.144522+0000 osd.1 (osd.1) 188 : cluster [DBG] 6.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:22.169230+0000 osd.1 (osd.1) 189 : cluster [DBG] 6.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 40960 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:54.285745+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 40960 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:55.285881+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 40960 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:56.286007+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966266 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 32768 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:57.286150+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.788084984s of 13.944467545s, submitted: 10
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 32768 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:58.286273+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:28.107881+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:28.125292+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 191)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:28.107881+0000 osd.1 (osd.1) 190 : cluster [DBG] 6.d scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:28.125292+0000 osd.1 (osd.1) 191 : cluster [DBG] 6.d scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 16384 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:59.286428+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.e scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 16384 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:00.287485+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:30.165809+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.e scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:30.179990+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.e scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 193)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:30.165809+0000 osd.1 (osd.1) 192 : cluster [DBG] 6.e scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:30.179990+0000 osd.1 (osd.1) 193 : cluster [DBG] 6.e scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 16384 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:01.287706+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971088 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 0 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:02.287907+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:32.208981+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.1 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:32.219566+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.1 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 195)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:32.208981+0000 osd.1 (osd.1) 194 : cluster [DBG] 6.1 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:32.219566+0000 osd.1 (osd.1) 195 : cluster [DBG] 6.1 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83959808 unmapped: 0 heap: 83959808 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:03.288114+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 1032192 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:04.288254+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:34.189685+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:34.203801+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 197)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:34.189685+0000 osd.1 (osd.1) 196 : cluster [DBG] 6.c scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:34.203801+0000 osd.1 (osd.1) 197 : cluster [DBG] 6.c scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1024000 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:05.288443+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 1024000 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:06.288589+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975910 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1015808 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:07.288700+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 1015808 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:08.288828+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.056145668s of 11.071456909s, submitted: 8
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 6.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 1007616 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:09.288983+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:39.179439+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:39.193453+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 199)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:39.179439+0000 osd.1 (osd.1) 198 : cluster [DBG] 6.b scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:39.193453+0000 osd.1 (osd.1) 199 : cluster [DBG] 6.b scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 1007616 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:10.289188+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 999424 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:11.289340+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:41.121529+0000 osd.1 (osd.1) 200 : cluster [DBG] 9.15 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:41.146148+0000 osd.1 (osd.1) 201 : cluster [DBG] 9.15 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 201)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:41.121529+0000 osd.1 (osd.1) 200 : cluster [DBG] 9.15 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:41.146148+0000 osd.1 (osd.1) 201 : cluster [DBG] 9.15 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 980734 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 999424 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:12.289512+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 999424 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:13.289614+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 991232 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:14.289744+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:15.289856+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:45.100125+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:45.138983+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 991232 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 203)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:45.100125+0000 osd.1 (osd.1) 202 : cluster [DBG] 9.14 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:45.138983+0000 osd.1 (osd.1) 203 : cluster [DBG] 9.14 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:16.290044+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 983040 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983147 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:17.290174+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 983040 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:18.290332+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 983040 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:19.290463+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 974848 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:20.290594+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 974848 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:21.290743+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 966656 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 983147 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:22.290873+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 966656 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:23.291020+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 966656 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:24.291177+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 950272 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.651216507s of 15.828714371s, submitted: 6
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:25.291299+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:55.008197+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:55.025804+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 950272 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 205)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:55.008197+0000 osd.1 (osd.1) 204 : cluster [DBG] 9.10 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:55.025804+0000 osd.1 (osd.1) 205 : cluster [DBG] 9.10 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:26.291486+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:56.007227+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:09:56.035429+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 925696 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 207)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:56.007227+0000 osd.1 (osd.1) 206 : cluster [DBG] 9.12 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:09:56.035429+0000 osd.1 (osd.1) 207 : cluster [DBG] 9.12 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987973 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:27.291699+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 925696 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:28.291823+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 917504 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:29.291945+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 917504 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:30.292054+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 209 sent 207 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:00.032152+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:00.074534+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 868352 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 209)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:00.032152+0000 osd.1 (osd.1) 208 : cluster [DBG] 9.2 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:00.074534+0000 osd.1 (osd.1) 209 : cluster [DBG] 9.2 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:31.292232+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 868352 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 992795 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:32.292342+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 211 sent 209 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:01.981901+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:02.031325+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 860160 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 211)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:01.981901+0000 osd.1 (osd.1) 210 : cluster [DBG] 9.0 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:02.031325+0000 osd.1 (osd.1) 211 : cluster [DBG] 9.0 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:33.292870+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 213 sent 211 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:02.994189+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:03.036538+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 860160 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 213)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:02.994189+0000 osd.1 (osd.1) 212 : cluster [DBG] 9.a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:03.036538+0000 osd.1 (osd.1) 213 : cluster [DBG] 9.a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:34.293031+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 851968 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.020460129s of 10.043588638s, submitted: 10
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:35.293176+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 215 sent 213 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:05.051809+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.1a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:05.076582+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.1a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 851968 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 215)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:05.051809+0000 osd.1 (osd.1) 214 : cluster [DBG] 9.1a scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:05.076582+0000 osd.1 (osd.1) 215 : cluster [DBG] 9.1a scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:36.293397+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 217 sent 215 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:06.018782+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:06.064681+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 827392 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 217)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:06.018782+0000 osd.1 (osd.1) 216 : cluster [DBG] 9.4 scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:06.064681+0000 osd.1 (osd.1) 217 : cluster [DBG] 9.4 scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1000030 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:37.293559+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84180992 unmapped: 827392 heap: 85008384 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_channel(cluster) log [DBG] : 9.1f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:38.293701+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  log_queue is 2 last_log 219 sent 217 num 2 unsent 2 sending 2
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:08.008816+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.1f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  will send 2026-01-20T19:10:08.037106+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.1f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1867776 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client handle_log_ack log(last 219)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:08.008816+0000 osd.1 (osd.1) 218 : cluster [DBG] 9.1f scrub starts
Jan 20 19:27:23 compute-0 ceph-osd[87071]: log_client  logged 2026-01-20T19:10:08.037106+0000 osd.1 (osd.1) 219 : cluster [DBG] 9.1f scrub ok
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:39.293882+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1867776 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:40.294067+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 1867776 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:41.294253+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1859584 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:42.294391+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84197376 unmapped: 1859584 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:43.294524+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 1851392 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:44.294656+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84205568 unmapped: 1851392 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:45.294786+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 1835008 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:46.294933+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 1835008 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:47.295078+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84221952 unmapped: 1835008 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:48.295219+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84230144 unmapped: 1826816 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:49.295341+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84238336 unmapped: 1818624 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:50.295483+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 1810432 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:51.295645+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 1810432 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:52.295810+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84246528 unmapped: 1810432 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:53.295932+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 1802240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:54.296061+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84254720 unmapped: 1802240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:55.296216+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1794048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:56.296395+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1794048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:57.296535+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84262912 unmapped: 1794048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:58.296665+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84271104 unmapped: 1785856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:59.296791+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1777664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:00.296945+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1777664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:01.297132+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84279296 unmapped: 1777664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:02.297251+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1769472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:03.297431+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1769472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:04.297572+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84287488 unmapped: 1769472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:05.297743+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1761280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:06.297879+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1761280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:07.297997+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1753088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:08.298123+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1753088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:09.298271+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1744896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:10.298409+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1736704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:11.298548+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1736704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:12.298756+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1728512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:13.298891+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1728512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:14.299010+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1728512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:15.299125+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 1712128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:16.299286+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 1712128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:17.299458+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1703936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:18.299609+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1703936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:19.299723+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 1695744 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:20.299841+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 1687552 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:21.299999+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 1687552 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:22.300113+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1679360 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:23.300220+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1679360 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:24.300395+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1671168 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:25.300505+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1671168 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:26.300667+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1662976 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:27.300931+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1662976 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:28.301250+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1662976 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:29.301483+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1654784 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:30.301717+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1654784 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:31.301891+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 1646592 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:32.302105+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 1646592 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:33.302242+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1638400 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:34.302378+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1761280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:35.302496+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84295680 unmapped: 1761280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:36.302716+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1753088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:37.302835+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1753088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:38.303033+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84303872 unmapped: 1753088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:39.303172+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1744896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:40.303336+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84312064 unmapped: 1744896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:41.303525+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1736704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:42.303686+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84320256 unmapped: 1736704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:43.303815+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1728512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:44.303998+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84328448 unmapped: 1728512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:45.304143+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 1720320 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:46.304274+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 1720320 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:47.304413+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84336640 unmapped: 1720320 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:48.304605+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 1712128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:49.304733+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84344832 unmapped: 1712128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:50.304875+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1703936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:51.305083+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1703936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:52.305213+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84353024 unmapped: 1703936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:53.305347+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84361216 unmapped: 1695744 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:54.305493+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84369408 unmapped: 1687552 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:55.305632+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1679360 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:56.305757+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84377600 unmapped: 1679360 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:57.306079+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1671168 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:58.306210+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1671168 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:59.306347+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 1671168 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:00.306490+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1662976 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:01.306655+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84393984 unmapped: 1662976 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:02.306808+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1654784 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:03.306930+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84402176 unmapped: 1654784 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:04.307051+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84410368 unmapped: 1646592 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:05.307189+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1638400 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:06.307306+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1638400 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:07.307442+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84418560 unmapped: 1638400 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:08.307583+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 1630208 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:09.307712+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84426752 unmapped: 1630208 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:10.307870+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 1622016 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:11.308035+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84434944 unmapped: 1622016 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:12.308146+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 1613824 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:13.308264+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84443136 unmapped: 1613824 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:14.308405+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 1605632 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:15.308523+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84451328 unmapped: 1605632 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:16.308660+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 1597440 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:17.308801+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 1597440 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:18.309032+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84459520 unmapped: 1597440 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:19.309290+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84467712 unmapped: 1589248 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:20.309429+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84475904 unmapped: 1581056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:21.309593+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 1572864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:22.309742+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 1572864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:23.310036+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 1572864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:24.310273+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 1556480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:25.310982+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 1556480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:26.311152+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 1548288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:27.311745+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 1548288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:28.312157+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 1548288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:29.312382+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 1540096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:30.312580+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 1531904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:31.313065+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 1523712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:32.313188+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 1523712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:33.313304+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 1523712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:34.313713+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 1515520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:35.314003+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 1515520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:36.314471+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 1507328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:37.314586+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 1507328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:38.314705+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1499136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:39.314837+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1499136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:40.314993+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 1499136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:41.315214+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1490944 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:42.315841+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 1490944 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:43.316003+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1482752 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:44.316130+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1482752 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:45.316325+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 1482752 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:46.316458+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1474560 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:47.316761+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1474560 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:48.316987+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 1474560 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:49.317151+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1466368 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:50.317386+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 1466368 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:51.317614+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 1458176 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:52.317736+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 1458176 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:53.317849+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1449984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:54.317962+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1449984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:55.318127+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 1449984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:56.318274+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 1441792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:57.318520+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 1441792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:58.318679+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 1433600 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:59.318817+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 1433600 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:00.318943+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1425408 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:01.319128+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1425408 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:02.319261+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 1425408 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:03.319456+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 1417216 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:04.319650+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1409024 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:05.319897+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 1409024 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:06.320021+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 1400832 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:07.320171+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 1400832 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:08.320469+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 1392640 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:09.320712+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 1392640 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:10.320859+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 1392640 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:11.321153+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 1384448 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:12.321292+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 1384448 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:13.321436+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 1376256 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:14.321595+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 1376256 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:15.321721+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1368064 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:16.321901+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 1368064 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:17.322222+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 1359872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:18.322431+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 1359872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:19.322576+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 1359872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:20.322729+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 1351680 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:21.322885+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 1351680 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:22.323058+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 1343488 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:23.323195+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 1343488 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:24.323277+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 1343488 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:25.323410+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 1335296 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:26.323658+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 1335296 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:27.323800+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 1327104 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:28.323953+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 1327104 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:29.324140+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 1327104 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:30.324287+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 1318912 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:31.324470+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84738048 unmapped: 1318912 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:32.324642+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 1310720 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:33.324777+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 1310720 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:34.326243+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 1302528 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:35.326406+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 1302528 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:36.326539+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 1302528 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:37.326749+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 1294336 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:38.327155+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 1294336 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:39.327542+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 1286144 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:40.327831+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 1286144 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:41.328106+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 1286144 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:42.328311+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 1277952 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:43.328498+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 1277952 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:44.328676+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 1269760 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:45.328920+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 1269760 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:46.329105+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 1269760 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:47.329238+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:48.329406+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 1261568 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:49.329559+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 1261568 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:50.329722+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1253376 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:51.330161+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1253376 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:52.330329+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 1253376 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:53.330488+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 1245184 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:54.330639+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 1245184 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:55.330951+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 1236992 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:56.331078+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 1236992 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:57.331285+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84819968 unmapped: 1236992 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:58.331451+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1228800 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:59.331586+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 1228800 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:00.331719+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 1220608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:01.331909+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 1220608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:02.332072+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 1220608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:03.332199+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 1212416 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:04.332386+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 1212416 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:05.332537+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 1204224 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Cumulative writes: 6904 writes, 28K keys, 6904 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 6904 writes, 1315 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 6904 writes, 28K keys, 6904 commit groups, 1.0 writes per commit group, ingest: 19.80 MB, 0.03 MB/s
                                           Interval WAL: 6904 writes, 1315 syncs, 5.25 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 6e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:06.332678+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84934656 unmapped: 1122304 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:07.332790+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1114112 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:08.332908+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1114112 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:09.333055+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84942848 unmapped: 1114112 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:10.333221+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 1105920 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:11.333427+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 1105920 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:12.333555+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 1097728 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:13.333728+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84959232 unmapped: 1097728 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:14.333884+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84967424 unmapped: 1089536 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:15.334055+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 1081344 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:16.334214+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84975616 unmapped: 1081344 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:17.334338+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 1073152 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:18.334491+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84983808 unmapped: 1073152 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:19.334674+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 1064960 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:20.334815+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 84992000 unmapped: 1064960 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:21.334991+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 1056768 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:22.335150+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85000192 unmapped: 1056768 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:23.335343+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 1048576 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:24.335533+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 1048576 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:25.335665+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85008384 unmapped: 1048576 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:26.335792+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 1040384 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:27.335933+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 1040384 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:28.336100+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 1040384 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:29.336287+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 1040384 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:30.336441+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85016576 unmapped: 1040384 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:31.336616+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 1032192 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:32.336759+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85024768 unmapped: 1032192 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:33.336866+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85032960 unmapped: 1024000 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:34.336990+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:35.337112+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:36.337264+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 1007616 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:37.337461+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 1007616 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:38.337676+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 999424 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:39.337906+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 999424 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:40.338054+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 991232 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:41.338276+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 991232 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:42.338472+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 991232 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:43.338616+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 983040 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:44.338761+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 983040 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:45.338935+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 974848 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:46.339061+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 974848 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:47.339216+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:48.339418+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:49.339572+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:50.339785+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 958464 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:51.339944+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 958464 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:52.340101+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 950272 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 257.986907959s of 257.997436523s, submitted: 6
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:53.340230+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 942080 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:54.340437+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:55.340550+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:56.340671+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:57.340801+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:58.340897+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:59.341078+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:00.341225+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:01.341428+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:02.341913+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:03.342709+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85041152 unmapped: 1015808 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:04.342833+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 1007616 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:05.342959+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 1007616 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:06.343077+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85049344 unmapped: 1007616 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:07.343234+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 999424 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:08.343416+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85057536 unmapped: 999424 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:09.343593+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 991232 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:10.343724+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85065728 unmapped: 991232 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:11.343882+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 983040 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:12.344027+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 983040 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:13.344147+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85073920 unmapped: 983040 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:14.344278+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85082112 unmapped: 974848 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:15.344420+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:16.344600+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:17.344752+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85090304 unmapped: 966656 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:18.344936+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 958464 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:19.345095+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 958464 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:20.345243+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85098496 unmapped: 958464 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:21.345461+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 950272 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:22.345613+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85106688 unmapped: 950272 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:23.345763+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 942080 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:24.345897+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 942080 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:25.346031+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85114880 unmapped: 942080 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:26.346162+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 933888 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:27.346318+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85123072 unmapped: 933888 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:28.346464+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 925696 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:29.346608+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 925696 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:30.346796+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85131264 unmapped: 925696 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:31.347029+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 917504 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:32.347163+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85139456 unmapped: 917504 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:33.347955+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 909312 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:34.348075+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85147648 unmapped: 909312 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:35.348191+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 892928 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:36.348308+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 892928 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:37.348421+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85164032 unmapped: 892928 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:38.348563+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 884736 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:39.348696+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85172224 unmapped: 884736 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:40.348781+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 876544 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:41.348968+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85180416 unmapped: 876544 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:42.349109+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 868352 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:43.349248+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 868352 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:44.349396+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85188608 unmapped: 868352 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:45.349530+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 860160 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:46.349658+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85196800 unmapped: 860160 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:47.349789+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 851968 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:48.349933+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 851968 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:49.350067+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85204992 unmapped: 851968 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:50.350184+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 843776 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:51.350339+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 843776 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:52.350477+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 835584 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:53.350623+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85221376 unmapped: 835584 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:54.350733+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 827392 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:55.350860+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 827392 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:56.351004+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85229568 unmapped: 827392 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:57.351142+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 819200 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:58.351281+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 819200 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:59.351415+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85237760 unmapped: 819200 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:00.351548+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 802816 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:01.351715+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85254144 unmapped: 802816 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:02.351840+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 794624 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:03.351969+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85262336 unmapped: 794624 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:04.352127+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 786432 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:05.352268+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85270528 unmapped: 786432 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:06.352398+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:07.352539+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:08.352748+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:09.352885+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:10.353075+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:11.353238+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:12.353460+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:13.353606+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:14.353754+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:15.353940+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:16.354058+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:17.354174+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:18.354287+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:19.354422+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:20.354539+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:21.354689+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:22.354811+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:23.354930+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:24.355052+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85278720 unmapped: 778240 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:25.355170+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:26.355446+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:27.355788+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:28.355964+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:29.356085+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:30.356213+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:31.356411+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:32.356551+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:33.356658+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:34.356783+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:35.356923+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:36.357063+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:37.357204+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85286912 unmapped: 770048 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:38.357334+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:39.357443+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:40.357574+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:41.357743+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:42.357871+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:43.357998+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:44.358127+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:45.358253+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:46.358525+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:47.358661+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:48.358825+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:49.358979+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:50.359102+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:51.359257+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:52.359416+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:53.359532+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:54.359660+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:55.359787+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:56.359957+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:57.360133+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:58.360296+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:59.360414+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:00.360531+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:01.360693+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:02.360809+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:03.360927+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85295104 unmapped: 761856 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:04.361348+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 753664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:05.361521+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 753664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:06.361631+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 753664 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:07.361748+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:08.361877+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:09.362007+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:10.362124+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:11.362286+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:12.362439+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:13.362621+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:14.362776+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:15.362951+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:16.363085+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:17.363222+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:18.363464+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:19.363618+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:20.363765+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:21.363976+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:22.364118+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:23.364238+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:24.364420+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:25.364563+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 745472 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:26.364702+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:27.364832+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:28.364958+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:29.365103+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:30.365327+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:31.366136+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:32.366419+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:33.366601+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:34.366877+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:35.367095+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:36.367263+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:37.367477+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:38.367671+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 737280 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:39.367852+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:40.368073+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:41.368309+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:42.368463+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:43.368662+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:44.368855+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:45.369141+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:46.369356+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:47.369581+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:48.369765+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:49.369947+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:50.370088+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:51.370271+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:52.370460+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:53.370624+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85327872 unmapped: 729088 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:54.370752+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:55.370880+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:56.371054+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:57.371207+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:58.371422+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:59.371540+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:00.371658+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:01.371859+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:02.371996+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:03.372137+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:04.372265+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:05.372411+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:06.372562+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:07.372724+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:08.372839+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:09.372966+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:10.373095+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:11.373272+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 720896 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:12.373410+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:13.373527+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:14.373657+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:15.373820+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:16.373949+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:17.374135+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:18.374320+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:19.374582+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:20.374795+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:21.374945+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:22.375224+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:23.375375+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:24.375549+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:25.375713+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:26.375890+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:27.376039+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:28.376170+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85344256 unmapped: 712704 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:29.376302+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 704512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:30.376407+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 704512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:31.376559+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 704512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:32.376669+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:33.376845+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 704512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:34.376977+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85352448 unmapped: 704512 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:35.377118+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 696320 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:36.377246+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:37.377377+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:38.377632+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:39.377779+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:40.377931+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:41.378097+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:42.378251+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:43.378467+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:44.378603+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:45.378730+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:46.378885+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:47.379013+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:48.379163+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:49.379330+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:50.380574+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:51.380743+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:52.381757+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:53.382168+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:54.382317+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:55.382430+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:56.382539+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:57.382753+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:58.382911+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:59.383037+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:00.383179+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:01.383325+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:02.383469+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:03.383688+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85368832 unmapped: 688128 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:04.383828+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:05.383987+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:06.384120+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:07.384283+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:08.384411+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:09.384545+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:10.384683+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:11.384852+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:12.385006+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:13.385146+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85377024 unmapped: 679936 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: mgrc ms_handle_reset ms_handle_reset con 0x5614daa2c000
Jan 20 19:27:23 compute-0 ceph-osd[87071]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/894791725
Jan 20 19:27:23 compute-0 ceph-osd[87071]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/894791725,v1:192.168.122.100:6801/894791725]
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: get_auth_request con 0x5614daa2dc00 auth_method 0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: mgrc handle_mgr_configure stats_period=5
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:14.385277+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 335872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:15.385414+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 335872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:16.385535+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85721088 unmapped: 335872 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 ms_handle_reset con 0x5614da393c00 session 0x5614dbb5c8c0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: handle_auth_request added challenge on 0x5614dbbba400
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:17.385685+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:18.385873+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:19.386055+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:20.386203+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:21.386349+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:22.386525+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:23.386646+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:24.386751+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:25.386878+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:26.387026+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:27.387144+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:28.387303+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:29.387462+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:30.387617+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:31.387767+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:32.387892+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:33.388023+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:34.388164+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:35.388305+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:36.388435+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:37.388603+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:38.388757+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:39.388889+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:40.389038+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:41.389203+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:42.389351+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:43.389511+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:44.389654+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:45.389788+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:46.389923+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:47.390044+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:48.390171+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:49.390319+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85860352 unmapped: 196608 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:50.390497+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 188416 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:51.390679+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 188416 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:52.390867+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85868544 unmapped: 188416 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.885894775s of 300.123291016s, submitted: 90
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:53.391029+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 565248 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:54.391222+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85491712 unmapped: 565248 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:55.391416+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:56.391553+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:57.391695+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:58.391804+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:59.391948+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:00.392075+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:01.392232+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:02.392343+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:03.392461+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:04.392583+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:05.392708+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:06.392843+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:07.393081+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:08.393248+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:09.393394+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:10.393547+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:11.393774+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:12.393955+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:13.394093+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:14.394237+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:15.394432+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:16.394601+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:17.394767+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:18.394977+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:19.395102+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:20.395231+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:21.395407+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:22.395517+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:23.395700+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:24.395829+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:25.395943+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:26.396255+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85499904 unmapped: 557056 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:27.396415+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:28.396546+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:29.396724+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:30.396856+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:31.397042+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:32.397256+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:33.397428+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:34.397613+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:35.397782+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:36.397920+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:37.398119+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:38.398278+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:39.398420+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:40.398586+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:41.398783+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:42.398917+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:43.399062+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:44.399185+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:45.399349+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:46.399505+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:47.399635+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:48.399763+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:49.399891+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:50.400024+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:51.400188+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:52.400331+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 540672 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:53.400456+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 540672 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:54.400582+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85516288 unmapped: 540672 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:55.400726+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:56.400896+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:57.401062+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:58.401306+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:59.401419+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:00.401543+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:01.401689+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:02.401797+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:03.401959+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:04.402076+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:05.402214+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:06.402302+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:07.402483+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:08.402603+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:09.402774+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:10.402909+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:11.403063+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:12.403221+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:13.403418+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:14.403559+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:15.403727+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:16.403848+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:17.403978+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:18.404151+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:19.404297+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:20.404454+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:21.404659+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:22.404813+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:23.404975+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:24.405102+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:25.405258+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:26.405426+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:27.405620+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:28.405785+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85524480 unmapped: 532480 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:29.405904+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:30.406056+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:31.406270+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:32.406425+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:33.406552+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:34.406683+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:35.406764+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:36.406904+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:37.407040+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:38.407165+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:39.407288+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:40.407443+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:41.407601+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:42.407720+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:43.411108+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:44.411288+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:45.411440+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:46.411571+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:47.411679+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:48.411759+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:49.411902+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:50.412051+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:51.412317+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:52.412468+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:53.412586+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:54.413230+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85532672 unmapped: 524288 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:55.413341+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:56.413476+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:57.413698+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:58.413849+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:59.414003+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:00.414150+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:01.414419+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:02.414620+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:03.414759+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:04.414907+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:05.415085+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:06.415234+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:07.415506+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:08.415683+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:09.415853+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:10.415996+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:11.416203+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:12.416344+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:13.416503+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:14.416630+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:15.416773+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:16.416918+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:17.417070+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:18.417207+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:19.417349+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:20.417567+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85540864 unmapped: 516096 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:21.417804+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:22.417949+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:23.418126+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:24.418280+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:25.418418+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:26.418572+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:27.418685+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:28.418859+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:29.418995+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:30.419130+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:31.419270+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:32.419436+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:33.419569+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:34.419718+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:35.419821+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:36.420082+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:37.420263+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:38.420440+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:39.420644+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:40.420777+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:41.420906+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85549056 unmapped: 507904 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:42.421010+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:43.421164+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:44.421322+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:45.421421+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:46.421536+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:47.421665+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:48.421800+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:49.421957+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:50.422107+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:51.422278+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:52.422434+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:53.422565+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:54.422716+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:55.422870+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:56.423026+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:57.423226+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:58.423463+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:59.423642+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:00.423828+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:01.424021+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:02.424151+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 499712 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:03.424297+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:04.424459+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:05.424586+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:06.424734+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:07.424975+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:08.425564+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:09.426117+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:10.426470+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:11.426827+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:12.426959+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:13.427236+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:14.427502+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:15.427756+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:16.428002+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:17.428220+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:18.428444+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:19.428651+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:20.428849+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:21.429070+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:22.429263+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:23.429484+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:24.429659+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:25.429861+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:26.430024+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:27.430201+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:28.430444+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:29.430567+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:30.430809+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 491520 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:31.431177+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread fragmentation_score=0.000127 took=0.000074s
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:32.431703+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:33.431986+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:34.432173+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:35.432394+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:36.432761+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:37.432937+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:38.433134+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:39.433297+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:40.433476+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:41.433852+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:42.434016+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:43.434185+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:44.434459+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:45.434636+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:46.434776+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:47.435006+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:48.435245+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:49.435465+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:50.435764+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:51.436187+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:52.436399+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:53.436619+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:54.436881+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:55.437099+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85573632 unmapped: 483328 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:56.437282+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:57.437494+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:58.437683+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:59.437951+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:00.438230+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:01.438785+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:02.439018+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:03.439343+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 475136 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:04.439676+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 458752 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:05.439874+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 458752 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Cumulative writes: 7128 writes, 29K keys, 7128 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 7128 writes, 1427 syncs, 5.00 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 224 writes, 336 keys, 224 commit groups, 1.0 writes per commit group, ingest: 0.12 MB, 0.00 MB/s
                                           Interval WAL: 224 writes, 112 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.02              0.00         1    0.016       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.03              0.00         1    0.029       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.003       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3d4b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 1e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.02              0.00         1    0.017       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.1 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614d8d3da30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.7e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:06.440166+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:07.440532+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:08.440843+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:09.441116+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:10.441450+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:11.441784+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:12.442015+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:13.442260+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:14.442460+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:15.442671+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:16.442850+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:17.443035+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85630976 unmapped: 425984 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:18.443263+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:19.443534+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:20.443754+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:21.444100+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:22.444439+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:23.444667+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:24.444861+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:25.445071+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:26.445339+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85639168 unmapped: 417792 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:27.445699+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:28.445913+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:29.446151+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:30.446347+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:31.446594+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:32.446788+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:33.446979+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:34.447234+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:35.447467+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:36.447690+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:37.447888+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:38.448080+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:39.448347+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:40.448617+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:41.448896+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:42.449090+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:43.449265+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:44.449471+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:45.449630+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:46.449809+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:47.449987+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:48.450177+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:49.450432+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:50.450649+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:51.450877+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:52.450986+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 299.933715820s of 299.960906982s, submitted: 22
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:53.451133+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 85508096 unmapped: 548864 heap: 86056960 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:54.451238+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 688128 heap: 87105536 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:55.451341+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:56.451435+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:57.451593+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:58.451707+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:59.451856+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:00.451997+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:01.452175+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:02.452316+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:03.452497+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:04.452677+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:05.452795+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:06.452913+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:07.453058+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:08.453241+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:09.453390+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:10.453519+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:11.453745+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:12.453950+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:13.454072+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:14.454204+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:15.454411+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:16.454856+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:17.455117+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:18.455416+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:19.455628+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:20.455925+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:21.456216+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:22.456471+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:23.456716+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:24.456948+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:25.457142+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:26.457421+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:27.457613+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:28.457765+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:29.457963+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:30.458146+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:31.458312+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:32.458466+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:33.458636+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:34.458812+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:35.458953+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:36.459081+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:37.459213+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:38.459347+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:39.459526+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:40.459655+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:41.459802+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:42.459983+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:43.460141+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:44.460294+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:45.460460+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:46.460577+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:47.460747+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:48.460960+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:49.461127+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:50.461246+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:51.461413+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:52.461539+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:53.461706+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:54.461842+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:55.461961+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:56.462135+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:57.462303+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:58.462495+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:59.462627+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:00.462853+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:01.462997+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:02.463146+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:03.463335+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:04.463514+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:05.463644+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:06.463786+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:07.463942+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:08.464064+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:09.464224+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:10.464408+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:11.464632+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:12.464808+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:13.464958+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:14.465084+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:15.465188+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:16.465318+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:17.465449+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:18.465618+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:19.465728+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:20.465890+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:21.466062+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:22.466207+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:23.466429+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:24.466657+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:25.466818+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:26.466986+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:27.467165+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:28.467349+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:29.467562+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:30.467680+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:31.467823+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:32.468017+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:33.468164+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:34.468425+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:35.468686+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86417408 unmapped: 1736704 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:36.468811+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:37.469016+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:38.469185+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:39.469330+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:40.469478+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:41.469701+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:42.469882+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:43.470086+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:44.470244+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:45.470433+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:46.470556+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:47.470836+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:48.470963+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:49.471087+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:50.471223+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:51.471424+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:52.471575+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:53.471775+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:54.471917+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:55.472032+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:56.472215+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:57.472342+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:58.472476+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:59.472626+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:00.472753+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:01.472905+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:02.473037+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:03.473165+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:04.473301+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:05.473431+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:06.473570+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:07.473725+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86425600 unmapped: 1728512 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:08.473858+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:09.473993+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:10.474133+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:11.474295+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:12.474428+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:13.474548+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:14.474671+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:15.474835+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:16.474959+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:17.475239+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:18.475430+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:19.475587+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:20.475754+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:21.475954+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:22.476140+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:23.476264+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:24.476452+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:25.476602+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:26.476769+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:27.476917+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:28.477057+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:29.477208+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:30.477407+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:31.477582+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:32.477703+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86433792 unmapped: 1720320 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:33.477835+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:34.477988+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:35.478107+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:36.478213+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:37.478833+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:38.479976+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:39.480993+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:40.481164+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:41.481331+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:42.481465+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:43.481588+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:44.481728+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:45.481862+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:46.482038+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:47.482204+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:48.482385+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: osd.1 122 heartbeat osd_stat(store_statfs(0x4fce43000/0x0/0x4ffc00000, data 0x12a599/0x1e9000, compress 0x0/0x0/0x0, omap 0x13442, meta 0x2bbcbbe), peers [0,2] op hist [])
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:49.482534+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86441984 unmapped: 1712128 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:50.482676+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: do_command 'config diff' '{prefix=config diff}'
Jan 20 19:27:23 compute-0 ceph-osd[87071]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86507520 unmapped: 1646592 heap: 88154112 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: do_command 'config show' '{prefix=config show}'
Jan 20 19:27:23 compute-0 ceph-osd[87071]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 20 19:27:23 compute-0 ceph-osd[87071]: do_command 'counter dump' '{prefix=counter dump}'
Jan 20 19:27:23 compute-0 ceph-osd[87071]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 20 19:27:23 compute-0 ceph-osd[87071]: do_command 'counter schema' '{prefix=counter schema}'
Jan 20 19:27:23 compute-0 ceph-osd[87071]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:23 compute-0 ceph-osd[87071]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:23 compute-0 ceph-osd[87071]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1002443 data_alloc: 218103808 data_used: 14031
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:51.482865+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 86818816 unmapped: 2383872 heap: 89202688 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: tick
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_tickets
Jan 20 19:27:23 compute-0 ceph-osd[87071]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:52.482987+0000)
Jan 20 19:27:23 compute-0 ceph-osd[87071]: prioritycache tune_memory target: 4294967296 mapped: 87105536 unmapped: 2097152 heap: 89202688 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:23 compute-0 ceph-osd[87071]: do_command 'log dump' '{prefix=log dump}'
Jan 20 19:27:24 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:27:24 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:24 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Jan 20 19:27:24 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4264928438' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 20 19:27:24 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:27:24 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/4264928438' entity='client.admin' cmd={"prefix": "osd crush class ls"} : dispatch
Jan 20 19:27:24 compute-0 rsyslogd[1007]: imjournal from <np0005589310:ceph-osd>: begin to drop messages due to rate-limiting
Jan 20 19:27:24 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Jan 20 19:27:24 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707810226' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 20 19:27:24 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Jan 20 19:27:24 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1944092360' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 20 19:27:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Jan 20 19:27:25 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3545263584' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 20 19:27:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Jan 20 19:27:25 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2603557667' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 20 19:27:25 compute-0 ceph-mon[75120]: pgmap v859: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:25 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1707810226' entity='client.admin' cmd={"prefix": "mgr dump", "format": "json-pretty"} : dispatch
Jan 20 19:27:25 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1944092360' entity='client.admin' cmd={"prefix": "osd crush dump"} : dispatch
Jan 20 19:27:25 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3545263584' entity='client.admin' cmd={"prefix": "osd crush rule ls"} : dispatch
Jan 20 19:27:25 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2603557667' entity='client.admin' cmd={"prefix": "mgr metadata", "format": "json-pretty"} : dispatch
Jan 20 19:27:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Jan 20 19:27:25 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3028248735' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 20 19:27:25 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Jan 20 19:27:25 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1457956596' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 20 19:27:26 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Jan 20 19:27:26 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2184354583' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 20 19:27:26 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:26 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Jan 20 19:27:26 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1411199381' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 20 19:27:26 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3028248735' entity='client.admin' cmd={"prefix": "osd crush show-tunables"} : dispatch
Jan 20 19:27:26 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1457956596' entity='client.admin' cmd={"prefix": "mgr module ls", "format": "json-pretty"} : dispatch
Jan 20 19:27:26 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2184354583' entity='client.admin' cmd={"prefix": "osd crush tree", "show_shadow": true} : dispatch
Jan 20 19:27:26 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1411199381' entity='client.admin' cmd={"prefix": "mgr services", "format": "json-pretty"} : dispatch
Jan 20 19:27:26 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Jan 20 19:27:26 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1134090533' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 20 19:27:26 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Jan 20 19:27:26 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2993692203' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 20 19:27:27 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Jan 20 19:27:27 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1738026789' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 20 19:27:27 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Jan 20 19:27:27 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3358894220' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 20 19:27:27 compute-0 ceph-mon[75120]: pgmap v860: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:27 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1134090533' entity='client.admin' cmd={"prefix": "osd erasure-code-profile ls"} : dispatch
Jan 20 19:27:27 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2993692203' entity='client.admin' cmd={"prefix": "mgr stat", "format": "json-pretty"} : dispatch
Jan 20 19:27:27 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1738026789' entity='client.admin' cmd={"prefix": "osd metadata"} : dispatch
Jan 20 19:27:27 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3358894220' entity='client.admin' cmd={"prefix": "mgr versions", "format": "json-pretty"} : dispatch
Jan 20 19:27:27 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd utilization"} v 0)
Jan 20 19:27:27 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3633708234' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 20 19:27:27 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14512 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:28 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14514 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:28 compute-0 sudo[247374]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:27:28 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:28 compute-0 sudo[247374]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:28 compute-0 sudo[247374]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:28 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14516 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:28 compute-0 sudo[247402]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --timeout 895 gather-facts
Jan 20 19:27:28 compute-0 sudo[247402]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:28 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3633708234' entity='client.admin' cmd={"prefix": "osd utilization"} : dispatch
Jan 20 19:27:28 compute-0 ceph-mon[75120]: from='client.14512 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:28 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14518 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:28 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:28 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} v 0)
Jan 20 19:27:28 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020651 4 0.000063
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.a( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.021331 4 0.001386
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020554 4 0.000058
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020444 4 0.000078
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.c( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.15( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000053 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020124 4 0.000074
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1d( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020398 4 0.000113
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.19( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.18( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020086 4 0.000063
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019997 4 0.000071
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019814 4 0.000102
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.1a( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.1f( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[8.14( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.019581 4 0.000100
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.018683 4 0.000070
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[11.17( empty local-lis/les=53/54 n=0 ec=51/37 lis/c=53/51 les/c/f=54/52/0 sis=53) [0] r=0 lpr=53 pi=[51,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[7.1b( empty local-lis/les=53/54 n=0 ec=47/23 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.020383 4 0.000154
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[3.12( empty local-lis/les=53/54 n=0 ec=43/17 lis/c=53/43 les/c/f=54/44/0 sis=53) [0] r=0 lpr=53 pi=[43,53)/1 crt=0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.055046 7 0.000553
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.055671 7 0.000130
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.053909 7 0.000126
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.054293 7 0.000142
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.055887 7 0.001254
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.053749 7 0.000066
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.058470 7 0.003222
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.054451 7 0.000082
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.026898 1 0.000021
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.027031 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.18( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 1.057736 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.062667 7 0.000131
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.062416 7 0.000093
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.061955 7 0.000131
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.061830 7 0.000376
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.062721 7 0.000058
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.067282 7 0.000087
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.064264 7 0.000149
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.062909 7 0.000078
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.063732 7 0.000067
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.067799 7 0.000103
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.061687 7 0.000098
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.061909 7 0.000160
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.061670 7 0.000109
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.093738 2 0.000044
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.093774 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.087637 1 0.000076
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000011 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.e( v 50'19 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.d( v 50'19 lc 36'5 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.088510 1 0.000507
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.d( v 50'19 lc 36'5 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.d( v 50'19 lc 36'5 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[10.d( v 50'19 lc 36'5 (0'0,50'19] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 65748992 unmapped: 2318336 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: handle_auth_request added challenge on 0x561429dae800
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:23.680907+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 54 handle_osd_map epochs [55,55], i have 54, src has [1,55]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.109261 2 0.000132
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 1.109284 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 54 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.362589 23 0.000171
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.369340 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 12.378690 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 12.378722 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.636933327s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894454956s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.636891365s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894454956s@ mbc={}] exit Reset 0.000082 1 0.000121
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.636891365s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894454956s@ mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.636891365s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894454956s@ mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.636891365s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894454956s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.636891365s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894454956s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.636891365s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894454956s@ mbc={}] enter Started/Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.363659 23 0.000246
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.369857 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 12.380464 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 12.380493 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635606766s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894157410s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.363836 23 0.000134
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635582924s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894157410s@ mbc={}] exit Reset 0.000055 1 0.000280
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635582924s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894157410s@ mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635582924s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894157410s@ mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.369990 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635582924s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894157410s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 12.380440 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635582924s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894157410s@ mbc={}] exit Start 0.000009 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 12.380479 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635582924s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894157410s@ mbc={}] enter Started/Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 11.363497 23 0.000388
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635617256s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894279480s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 11.369998 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 12.380111 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 12.380165 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635570526s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894279480s@ mbc={}] exit Reset 0.000085 1 0.000140
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635570526s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894279480s@ mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635570526s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894279480s@ mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635570526s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894279480s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635570526s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894279480s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635570526s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894279480s@ mbc={}] enter Started/Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635735512s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 active pruub 94.894470215s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635711670s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894470215s@ mbc={}] exit Reset 0.000061 1 0.000093
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635711670s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894470215s@ mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635711670s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894470215s@ mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635711670s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894470215s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635711670s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894470215s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55 pruub=12.635711670s) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 94.894470215s@ mbc={}] enter Started/Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.123948 6 0.000763
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.128131 6 0.000053
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.15( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.17( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.13( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.129815 6 0.000039
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.13( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.13( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.11( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.129612 6 0.000048
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.11( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.11( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.126925 6 0.000535
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.127357 6 0.000052
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.9( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.127784 6 0.000050
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.9( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.9( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.129343 6 0.000041
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.125535 6 0.001067
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.3( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] exit Started/Stray 1.128669 6 0.000035
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.3( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.3( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=8 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.130822 6 0.000050
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.7( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=3 mbc={}] exit Started/Stray 1.127635 6 0.000112
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1b( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=3 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.5( v 50'484 lc 0'0 (0'0,50'484] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=50'484 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.131686 6 0.000043
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.5( v 50'484 lc 0'0 (0'0,50'484] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=50'484 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.5( v 50'484 lc 0'0 (0'0,50'484] local-lis/les=0/0 n=7 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=50'484 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.19( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.127589 6 0.000055
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.19( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.19( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.129185 6 0.000080
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1d( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.128740 6 0.000145
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.1f( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=49/49 les/c/f=50/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 crt=39'483 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 55 handle_osd_map epochs [55,55], i have 55, src has [1,55]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 66273280 unmapped: 1794048 heap: 68067328 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.d( v 55'20 (0'0,55'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 pct=50'19 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovering 1.207570 4 0.000074
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.d( v 55'20 (0'0,55'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 pct=50'19 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.d( v 55'20 (0'0,55'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 pct=50'19 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.d( v 55'20 (0'0,55'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 pct=50'19 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.15( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.295631 4 0.000216
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.15( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.15( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000018 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.15( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.15( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.003284 1 0.000186
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.15( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.15( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000026 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.15( v 54'20 (0'0,54'20] local-lis/les=53/54 n=0 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.298977 5 0.000108
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000021 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.6( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.440033 5 0.000045
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 1.440133 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.135563 1 0.000127
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000021 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.6( v 32'6 (0'0,32'6] local-lis/les=53/54 n=1 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.9( v 54'20 (0'0,54'20] local-lis/les=53/54 n=1 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.435263 4 0.000690
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.9( v 54'20 (0'0,54'20] local-lis/les=53/54 n=1 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.9( v 54'20 (0'0,54'20] local-lis/les=53/54 n=1 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.9( v 54'20 (0'0,54'20] local-lis/les=53/54 n=1 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active+recovery_wait mbc={255={}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.9( v 54'20 (0'0,54'20] local-lis/les=53/54 n=1 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.003428 1 0.000083
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.9( v 54'20 (0'0,54'20] local-lis/les=53/54 n=1 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 1.427983 5 0.000105
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.9( v 54'20 (0'0,54'20] local-lis/les=53/54 n=1 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000026 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[10.9( v 54'20 (0'0,54'20] local-lis/les=53/54 n=1 ec=49/35 lis/c=53/49 les/c/f=54/50/0 sis=53) [0] r=0 lpr=53 pi=[49,53)/1 crt=50'19 lcod 50'19 mlcod 50'19 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000009 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.f( v 32'6 lc 0'0 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.446565 5 0.000032
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 1.446638 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.156067 1 0.000102
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000018 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[8.f( v 32'6 (0'0,32'6] local-lis/les=53/54 n=0 ec=47/31 lis/c=53/47 les/c/f=54/48/0 sis=53) [0] r=0 lpr=53 pi=[47,53)/1 crt=32'6 mlcod 32'6 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.576706 4 0.000045
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.576809 4 0.000018
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1b( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.576741 4 0.000024
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.e( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.576820 4 0.000027
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.577301 4 0.000028
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.a( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.577251 4 0.000048
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.13( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.576937 4 0.000032
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1c( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.577051 4 0.000029
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.11( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.570218 4 0.000036
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.4( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.570370 4 0.000033
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.2( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.570408 4 0.000025
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.d( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.570509 4 0.000063
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.f( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.570591 4 0.000032
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.1( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.570691 4 0.000033
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.7( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.570769 4 0.000020
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.571088 4 0.000029
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.9( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.571239 4 0.000023
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.5( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.571254 4 0.000026
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.8( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.571379 4 0.000043
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.10( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.571483 4 0.000058
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.12( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.571612 4 0.000039
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.14( empty local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.605012 5 0.000039
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 1.605047 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 1.511542 4 0.000059
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.496080 4 0.000053
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.165272 1 0.000263
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.158725 1 0.000210
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000481 1 0.000073
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.066885 1 0.000048
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.643741 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1b( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.699466 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.670440 5 0.000040
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 1.670500 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000088 1 0.000125
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.552115 3 0.000079
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000142 1 0.000046
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.17( v 39'483 lc 39'136 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.074336 1 0.000072
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.651138 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.e( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.705108 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.081957 1 0.000140
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.658720 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1a( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.713824 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.088710 1 0.000151
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.665610 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.719964 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.095777 1 0.000081
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.673154 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.a( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.729091 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.102731 1 0.000051
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.680034 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.13( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.733837 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.110829 1 0.000082
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.687841 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.1c( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.748155 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.117392 1 0.000145
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.694542 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.11( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [2] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.749045 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.124257 1 0.000105
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.694533 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.4( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.757283 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.131779 1 0.000045
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.702220 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.2( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.764705 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.d( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.138977 1 0.000045
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.d( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.709446 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.d( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.771316 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.146083 1 0.000070
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.716646 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.f( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.778651 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.1( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.153628 1 0.000063
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.1( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.724281 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.1( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.787038 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.7( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.160875 1 0.000052
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.7( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.731629 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.7( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.798953 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.9( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.167964 1 0.000232
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.9( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.738773 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.9( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 2.803086 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.175104 1 0.000137
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.746326 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.9( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.809281 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.182288 1 0.000032
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.753560 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.5( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.817333 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.189510 1 0.000039
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.760801 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.8( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.828654 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.10( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.196798 1 0.000076
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.10( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.768219 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.10( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.830229 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.12( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.204208 1 0.000089
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.12( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.775765 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.12( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.837497 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.14( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.211278 1 0.000058
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.14( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 1.782934 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[4.14( empty lb MIN local-lis/les=43/44 n=0 ec=43/19 lis/c=43/43 les/c/f=44/44/0 sis=53) [1] r=-1 lpr=53 pi=[43,53)/1 crt=0'0 unknown NOTIFY mbc={}] exit Started 2.844651 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.3( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.274326 2 0.000118
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.3( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete 1.785922 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.3( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started 2.908905 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.288776 2 0.000114
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.784905 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started 2.923069 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.311142 2 0.000195
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.476507 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started 2.945740 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.318288 2 0.000081
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.477083 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.7( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started 2.953736 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.333033 2 0.000055
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.333558 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.5( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started 2.973129 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 DELETING pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.275010 2 0.000103
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.275152 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=53) [1] r=-1 lpr=53 pi=[45,53)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started 2.975951 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.293600 1 0.000071
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.846112 3 0.000113
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000106 1 0.000085
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.15( v 39'483 lc 39'153 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.031512 1 0.000045
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.13( v 39'483 lc 39'131 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.877754 3 0.000123
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.13( v 39'483 lc 39'131 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.13( v 39'483 lc 39'131 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000090 1 0.000049
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.13( v 39'483 lc 39'131 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:24.681082+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.048618 1 0.000036
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.9( v 39'483 lc 39'81 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.926456 3 0.000059
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.9( v 39'483 lc 39'81 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.9( v 39'483 lc 39'81 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000099 1 0.000087
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.9( v 39'483 lc 39'81 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.040971 1 0.000030
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.3( v 39'483 lc 39'107 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.965814 3 0.000093
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.3( v 39'483 lc 39'107 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.3( v 39'483 lc 39'107 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000126 1 0.000139
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 55 pg[9.3( v 39'483 lc 39'107 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 55 handle_osd_map epochs [56,56], i have 55, src has [1,56]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.121349 1 0.000064
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.999376 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.124082 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.031224 1 0.000065
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.999013 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.126826 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000099 1 0.000310
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000050 1 0.000204
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000037 1 0.000044
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.002224 6 0.000115
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.002014 6 0.000087
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.153749 1 0.000036
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.999700 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.128003 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000044 1 0.000210
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000021 1 0.000031
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.073071 1 0.000068
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 0.999900 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000628 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.129801 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000847 1 0.001219
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000072 1 0.001558
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000267 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000033 1 0.000490
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002095 3 0.000047
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 lc 39'256 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 39'206 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepRecovering 0.033084 3 0.000059
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 lc 39'256 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 39'206 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001969 3 0.000025
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=14
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=14
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000724 3 0.000050
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000014 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001245 3 0.000156
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.006123 7 0.000054
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000102 1 0.000118
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.010048 7 0.000066
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000098 1 0.000044
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 DELETING pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.006484 1 0.000028
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.006679 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.2( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.012865 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 DELETING pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.009704 1 0.000066
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.009860 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.a( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 1.019952 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 68632576 unmapped: 483328 heap: 69115904 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.085315 3 0.000327
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.085369 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000097 1 0.000082
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.b( v 39'483 lc 39'139 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.146893 6 0.000055
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.b( v 39'483 lc 39'139 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.b( v 39'483 lc 39'139 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000170 1 0.000064
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.b( v 39'483 lc 39'139 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.165994 3 0.000551
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.166083 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000111 1 0.000079
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 DELETING pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.084213 2 0.000325
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.084396 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.6( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started 1.172308 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 DELETING pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.024731 2 0.000316
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.024956 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[6.e( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=55) [1] r=-1 lpr=55 pi=[45,55)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started 1.193487 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.058996 1 0.000252
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.11( v 39'483 lc 39'61 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.206780 6 0.000103
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.11( v 39'483 lc 39'61 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.11( v 39'483 lc 39'61 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000089 1 0.000060
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.11( v 39'483 lc 39'61 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.054491 1 0.000071
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.7( v 39'483 lc 39'49 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.259366 6 0.000100
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.7( v 39'483 lc 39'49 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.7( v 39'483 lc 39'49 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000209 1 0.000190
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.7( v 39'483 lc 39'49 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.052700 1 0.000104
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.19( v 39'483 lc 39'58 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.312011 6 0.000113
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.19( v 39'483 lc 39'58 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.19( v 39'483 lc 39'58 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000243 1 0.000038
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.19( v 39'483 lc 39'58 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.066510 1 0.000568
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.5( v 55'486 lc 0'0 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=50'484 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.379429 6 0.000116
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.5( v 55'486 lc 0'0 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=50'484 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.5( v 55'486 lc 0'0 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=50'484 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000110 1 0.000115
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.5( v 55'486 lc 0'0 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=50'484 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=50'484 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.068704 1 0.000041
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=50'484 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1( v 39'483 lc 39'154 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.450052 6 0.000110
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1( v 39'483 lc 39'154 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1( v 39'483 lc 39'154 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000193 1 0.000062
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1( v 39'483 lc 39'154 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.066683 1 0.000154
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.517585 6 0.000085
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000087 1 0.000080
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.f( v 39'483 lc 39'43 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.061501 1 0.000038
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.576940 6 0.000188
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000278 1 0.000063
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1f( v 39'483 lc 39'88 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.039633 1 0.000042
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.d( v 39'483 lc 39'37 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.619513 6 0.000090
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.d( v 39'483 lc 39'37 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.d( v 39'483 lc 39'37 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000288 1 0.000103
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.d( v 39'483 lc 39'37 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=8 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 8.695466995s of 10.003705978s, submitted: 691
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.059620 1 0.000043
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 lc 39'102 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.677031 6 0.000101
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 lc 39'102 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 lc 39'102 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000069 1 0.000045
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 lc 39'102 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.038549 1 0.000034
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1b( v 39'483 lc 39'73 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.716187 6 0.000074
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1b( v 39'483 lc 39'73 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1b( v 39'483 lc 39'73 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=3 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000091 1 0.000088
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1b( v 39'483 lc 39'73 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=3 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.024450 1 0.000087
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 lc 39'256 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.741697 2 0.000578
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 lc 39'256 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 lc 39'256 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000077 1 0.000044
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 lc 39'256 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 56 heartbeat osd_stat(store_statfs(0x4fe127000/0x0/0x4ffc00000, data 0x48f60/0xa5000, compress 0x0/0x0/0x0, omap 0x8ed1, meta 0x1a2712f), peers [1,2] op hist [])
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.032822 1 0.000082
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 56 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:25.681270+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 56 handle_osd_map epochs [56,57], i have 56, src has [1,57]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.015990 2 0.000181
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.016876 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=50'484 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.567548 1 0.000036
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=50'484 active+remapped mbc={}] exit Started/ReplicaActive 2.015910 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=50'484 active+remapped mbc={}] exit Started 3.147623 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.756696 1 0.000038
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=50'484 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 2.018200 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 3.147847 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=50'484 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 unknown mbc={}] exit Reset 0.000070 1 0.000105
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.811933 1 0.000075
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 2.018225 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 3.147597 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000285 1 0.000351
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000043 1 0.000077
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.017068 2 0.000057
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.019098 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.439516 1 0.000043
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 2.018813 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 3.146196 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.339285 1 0.000054
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 2.018863 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000030 1 0.000049
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 3.145813 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000044 1 0.000070
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.704960 1 0.000063
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.501767 1 0.000103
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 2.017409 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 2.018911 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 3.148270 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 3.144472 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000065 1 0.000088
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.243427 1 0.000042
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000091 1 0.000121
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 2.017688 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 3.146385 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000041 1 0.000082
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.301350 1 0.000034
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 2.017071 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 3.146293 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.400149 1 0.000060
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 2.017120 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 3.145916 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000054 1 0.000083
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000026 1 0.000043
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.638333 1 0.000111
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 2.017261 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 3.144881 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000044 1 0.000066
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.276848 1 0.000039
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 2.017723 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.018262 2 0.000203
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.020722 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 3.145402 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=54) [0]/[1] r=-1 lpr=54 pi=[49,54)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.017923 2 0.000046
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.020058 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000050 1 0.000115
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002521 3 0.000125
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.13( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 57 handle_osd_map epochs [57,57], i have 57, src has [1,57]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002420 2 0.000066
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.002689 2 0.000045
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004208 4 0.000074
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002951 4 0.000072
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.003563 2 0.000485
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004835 2 0.000039
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.005218 2 0.000031
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004789 2 0.000054
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004794 2 0.000028
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004009 3 0.000906
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004511 2 0.000041
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.9( v 39'483 (0'0,39'483] local-lis/les=56/57 n=7 ec=49/33 lis/c=56/49 les/c/f=57/50/0 sis=56) [0] r=0 lpr=56 pi=[49,56)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004710 2 0.000038
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004861 2 0.000039
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.004912 2 0.000033
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.005184 2 0.000031
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=0/0 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=17
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=17
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.003929 2 0.000055
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=9
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=17
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=17
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.004042 2 0.000044
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000011 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002254 2 0.000042
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=19
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=19
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000834 2 0.000047
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=21
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=21
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002304 2 0.000038
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=8
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=8
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000853 2 0.000044
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=21
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=21
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000829 2 0.000044
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=12
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=12
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000788 2 0.000055
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=11
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=11
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000737 2 0.000033
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=20
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=19
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=20
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=19
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000731 2 0.000029
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001078 2 0.000529
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=18
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=18
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001423 2 0.000045
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 57 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69206016 unmapped: 958464 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:26.681469+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 57 handle_osd_map epochs [58,58], i have 57, src has [1,58]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005275 2 0.000054
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011721 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004817 2 0.000061
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012045 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=50'484 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=55'486 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005150 2 0.000037
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011104 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005405 2 0.000053
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012244 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005620 2 0.000339
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012324 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005764 2 0.000036
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011740 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006025 2 0.000085
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011941 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006093 2 0.000050
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011767 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=54/55 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006033 2 0.000032
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011745 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006155 2 0.000027
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011857 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006216 2 0.000053
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011814 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.006648 2 0.000049
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.012087 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=54/55 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=55'486 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002639 3 0.000475
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.11( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=55'486 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.002922 3 0.000159
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=55'486 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=55'486 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.5( v 55'486 (0'0,55'486] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=55'486 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005496 3 0.000413
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000007 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005684 3 0.000179
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=54/49 les/c/f=55/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005054 3 0.000146
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005056 3 0.000125
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005192 3 0.000150
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004996 3 0.000099
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000013 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.3( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000004 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1d( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004912 3 0.000082
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004940 3 0.000107
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004516 3 0.000153
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005367 3 0.000790
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.1b( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.19( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 58 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/49 les/c/f=58/50/0 sis=57) [0] r=0 lpr=57 pi=[49,57)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:28 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69287936 unmapped: 876544 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 620299 data_alloc: 218103808 data_used: 1010
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:27.681745+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69304320 unmapped: 860160 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:28.681891+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 802816 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:29.682064+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 11 sent 9 num 2 unsent 2 sending 2
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:04:58.867220+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.1f scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:04:58.877766+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.1f scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 917504 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 11)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:04:58.867220+0000 osd.0 (osd.0) 10 : cluster [DBG] 4.1f scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:04:58.877766+0000 osd.0 (osd.0) 11 : cluster [DBG] 4.1f scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:30.682329+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 13 sent 11 num 2 unsent 2 sending 2
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:04:59.864833+0000 osd.0 (osd.0) 12 : cluster [DBG] 4.6 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:04:59.875517+0000 osd.0 (osd.0) 13 : cluster [DBG] 4.6 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 58 heartbeat osd_stat(store_statfs(0x4fe109000/0x0/0x4ffc00000, data 0x4d15e/0xc3000, compress 0x0/0x0/0x0, omap 0x93c6, meta 0x1a26c3a), peers [1,2] op hist [])
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 917504 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 13)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:04:59.864833+0000 osd.0 (osd.0) 12 : cluster [DBG] 4.6 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:04:59.875517+0000 osd.0 (osd.0) 13 : cluster [DBG] 4.6 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:31.682715+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:28 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 917504 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 620339 data_alloc: 218103808 data_used: 1010
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:32.682864+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69246976 unmapped: 917504 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 58 handle_osd_map epochs [59,59], i have 58, src has [1,59]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f(unlocked)] enter Initial
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=0 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000212 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=0 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000056 1 0.000174
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000267 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000325 1 0.000511
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3(unlocked)] enter Initial
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=0 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000175 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=0 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000017 1 0.000034
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000264 1 0.000063
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b(unlocked)] enter Initial
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=0 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000156 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=0 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000052
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000162 1 0.000063
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7(unlocked)] enter Initial
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=0 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=0 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000025
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000101 1 0.000048
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetLog 0.003068 2 0.000283
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/GetMissing 0.000009 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.002528 2 0.000158
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002593 2 0.000129
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetLog 0.002523 2 0.000056
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 59 pg[6.7( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 59 handle_osd_map epochs [60,60], i have 59, src has [1,60]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 59 handle_osd_map epochs [59,60], i have 60, src has [1,60]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 21.148663 39 0.000228
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 21.154976 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 22.165044 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 22.165082 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.851051331s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 active pruub 102.894393921s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.850987434s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894393921s@ mbc={}] exit Reset 0.000145 1 0.000229
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.850987434s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894393921s@ mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.850987434s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894393921s@ mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.850987434s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894393921s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.850987434s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894393921s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.850987434s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894393921s@ mbc={}] enter Started/Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.586412 2 0.000090
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.589106 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.587050 2 0.000056
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=1 mbc={}] exit Started/Primary/Peering 0.589875 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 unknown m=1 mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 activating+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.588464 2 0.000075
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 0.591382 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering/WaitUpThru 0.589696 2 0.000105
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 peering m=3 mbc={}] exit Started/Primary/Peering 0.593239 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 21.150592 39 0.000250
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 unknown m=3 mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 21.156780 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 22.168079 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 22.168169 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.849040031s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 active pruub 102.894691467s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.848990440s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894691467s@ mbc={}] exit Reset 0.000099 1 0.000182
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.848990440s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894691467s@ mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.848990440s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894691467s@ mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.848990440s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894691467s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.848990440s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894691467s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60 pruub=10.848990440s) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 102.894691467s@ mbc={}] enter Started/Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 60 handle_osd_map epochs [60,60], i have 60, src has [1,60]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=59/53 les/c/f=60/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.006349 4 0.000436
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=59/53 les/c/f=60/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.007396 4 0.000492
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=59/53 les/c/f=60/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000146 1 0.000051
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=59/53 les/c/f=60/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/Activating 0.007237 4 0.000478
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=59/53 les/c/f=60/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=59/53 les/c/f=60/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/Activating 0.005779 4 0.000827
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=59/53 les/c/f=60/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067831 2 0.000058
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=59/53 les/c/f=60/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=59/53 les/c/f=60/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000026 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.3( v 39'39 (0'0,39'39] local-lis/les=59/60 n=2 ec=45/22 lis/c=59/53 les/c/f=60/54/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.068085 2 0.000044
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 lc 39'21 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:33.683078+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.058642 1 0.000091
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000009 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.7( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.126776 2 0.000086
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 0'0 active+recovery_wait+degraded m=1 mbc={255={(0+1)=1}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.066574 1 0.000075
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000037 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.193256 2 0.000054
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000025 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 lc 39'1 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=3 mbc={255={(0+1)=3}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69271552 unmapped: 892928 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.221905 1 0.000102
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 60 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/53 les/c/f=60/55/0 sis=59) [0] r=0 lpr=59 pi=[53,59)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 60 handle_osd_map epochs [61,61], i have 60, src has [1,61]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.034006 6 0.000081
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.031761 6 0.000114
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.008043 3 0.000065
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.008090 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000098 1 0.000102
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:34.683283+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 DELETING pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.124184 2 0.000179
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.124418 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.c( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started 1.164350 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.b scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.b scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.272086 3 0.000126
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ReplicaActive 0.272169 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000171 1 0.000178
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69337088 unmapped: 827392 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 DELETING pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete/Deleting 0.032168 2 0.000273
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started/ToDelete 0.032438 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 61 pg[6.4( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=2 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=60) [1] r=-1 lpr=60 pi=[45,60)/1 pct=0'0 crt=39'39 lcod 0'0 active mbc={}] exit Started 1.338715 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:35.683574+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 15 sent 13 num 2 unsent 2 sending 2
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:04.812461+0000 osd.0 (osd.0) 14 : cluster [DBG] 4.b scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:04.882215+0000 osd.0 (osd.0) 15 : cluster [DBG] 4.b scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 811008 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 15)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:04.812461+0000 osd.0 (osd.0) 14 : cluster [DBG] 4.b scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:04.882215+0000 osd.0 (osd.0) 15 : cluster [DBG] 4.b scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:36.684523+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.122147560s of 11.288543701s, submitted: 138
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 61 heartbeat osd_stat(store_statfs(0x4fe0fe000/0x0/0x4ffc00000, data 0x5225b/0xcc000, compress 0x0/0x0/0x0, omap 0xb55c, meta 0x1a24aa4), peers [1,2] op hist [])
Jan 20 19:27:28 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:28 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69353472 unmapped: 811008 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 640305 data_alloc: 218103808 data_used: 1010
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:37.684718+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 17 sent 15 num 2 unsent 2 sending 2
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:06.748862+0000 osd.0 (osd.0) 16 : cluster [DBG] 4.3 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:06.759471+0000 osd.0 (osd.0) 17 : cluster [DBG] 4.3 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.0 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.0 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69361664 unmapped: 802816 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 17)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:06.748862+0000 osd.0 (osd.0) 16 : cluster [DBG] 4.3 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:06.759471+0000 osd.0 (osd.0) 17 : cluster [DBG] 4.3 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:38.684967+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 19 sent 17 num 2 unsent 2 sending 2
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:07.757282+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.0 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:07.767545+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.0 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 794624 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 19)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:07.757282+0000 osd.0 (osd.0) 18 : cluster [DBG] 4.0 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:07.767545+0000 osd.0 (osd.0) 19 : cluster [DBG] 4.0 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:39.685255+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.c scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.c scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69369856 unmapped: 794624 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 61 heartbeat osd_stat(store_statfs(0x4fe100000/0x0/0x4ffc00000, data 0x5225b/0xcc000, compress 0x0/0x0/0x0, omap 0xb55c, meta 0x1a24aa4), peers [1,2] op hist [])
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 61 handle_osd_map epochs [62,62], i have 61, src has [1,62]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d(unlocked)] enter Initial
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=0 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000092 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=0 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000023
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000089 1 0.000037
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5(unlocked)] enter Initial
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=0 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000060 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=0 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000007 1 0.000018
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000060 1 0.000042
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001434 2 0.000051
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000012 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetLog 0.001310 2 0.000030
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/GetMissing 0.000003 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 62 pg[6.5( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:40.685406+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 21 sent 19 num 2 unsent 2 sending 2
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:09.726610+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.c scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:09.737185+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.c scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69492736 unmapped: 671744 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 21)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:09.726610+0000 osd.0 (osd.0) 20 : cluster [DBG] 4.c scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:09.737185+0000 osd.0 (osd.0) 21 : cluster [DBG] 4.c scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 62 handle_osd_map epochs [62,63], i have 62, src has [1,63]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.018029 2 0.000052
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.019479 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering/WaitUpThru 1.019550 2 0.000077
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 peering m=2 mbc={}] exit Started/Primary/Peering 1.021155 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 0'0 unknown m=2 mbc={}] enter Started/Primary/Active
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Activating
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=53/53 les/c/f=54/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=62/53 les/c/f=63/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002449 4 0.000282
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=62/53 les/c/f=63/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 63 handle_osd_map epochs [63,63], i have 63, src has [1,63]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=62/53 les/c/f=63/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000350 1 0.000051
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=62/53 les/c/f=63/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=62/53 les/c/f=63/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000010 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 lc 39'11 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=62/53 les/c/f=63/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/53 les/c/f=63/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/Activating 0.002797 4 0.000267
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/53 les/c/f=63/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=62/53 les/c/f=63/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.067672 2 0.000117
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=62/53 les/c/f=63/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=62/53 les/c/f=63/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.5( v 39'39 (0'0,39'39] local-lis/les=62/63 n=2 ec=45/22 lis/c=62/53 les/c/f=63/55/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/53 les/c/f=63/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.066293 2 0.000143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/53 les/c/f=63/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/53 les/c/f=63/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000007 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 lc 39'13 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/53 les/c/f=63/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 lcod 0'0 mlcod 0'0 active+recovery_wait+degraded m=2 mbc={255={(0+1)=2}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/53 les/c/f=63/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovering 0.125689 1 0.000118
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/53 les/c/f=63/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/53 les/c/f=63/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active/Recovered 0.000012 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 63 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/53 les/c/f=63/54/0 sis=62) [0] r=0 lpr=62 pi=[53,62)/1 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Started/Primary/Active/Clean
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:41.685621+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  log_queue is 3 last_log 24 sent 21 num 3 unsent 3 sending 3
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:10.723992+0000 osd.0 (osd.0) 22 : cluster [DBG] 4.15 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:10.734579+0000 osd.0 (osd.0) 23 : cluster [DBG] 4.15 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:11.682706+0000 osd.0 (osd.0) 24 : cluster [DBG] 4.16 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:28 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 581632 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 665055 data_alloc: 218103808 data_used: 1010
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 24)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:10.723992+0000 osd.0 (osd.0) 22 : cluster [DBG] 4.15 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:10.734579+0000 osd.0 (osd.0) 23 : cluster [DBG] 4.15 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:11.682706+0000 osd.0 (osd.0) 24 : cluster [DBG] 4.16 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:42.685868+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  log_queue is 1 last_log 25 sent 24 num 1 unsent 1 sending 1
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:11.693286+0000 osd.0 (osd.0) 25 : cluster [DBG] 4.16 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69582848 unmapped: 581632 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 63 handle_osd_map epochs [64,64], i have 63, src has [1,64]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 25)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:11.693286+0000 osd.0 (osd.0) 25 : cluster [DBG] 4.16 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:43.686074+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69591040 unmapped: 573440 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 64 handle_osd_map epochs [65,66], i have 64, src has [1,66]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 17.529025 23 0.000186
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active 17.534853 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary 18.546002 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] exit Started 18.546053 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=39'483 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 18.542708 25 0.000168
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active 18.546997 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary 19.566111 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=39'483 mlcod 0'0 active mbc={}] exit Started 19.566144 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=39'483 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=13.457354546s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 active pruub 116.284957886s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470577240s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 active pruub 117.298309326s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=13.457168579s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 unknown NOTIFY pruub 116.284957886s@ mbc={}] exit Reset 0.000231 1 0.000301
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=13.457168579s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 unknown NOTIFY pruub 116.284957886s@ mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=13.457168579s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 unknown NOTIFY pruub 116.284957886s@ mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=13.457168579s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 unknown NOTIFY pruub 116.284957886s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=13.457168579s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 unknown NOTIFY pruub 116.284957886s@ mbc={}] exit Start 0.000014 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66 pruub=13.457168579s) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 unknown NOTIFY pruub 116.284957886s@ mbc={}] enter Started/Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470458031s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298309326s@ mbc={}] exit Reset 0.000363 1 0.000439
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470458031s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298309326s@ mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470458031s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298309326s@ mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470458031s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298309326s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470458031s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298309326s@ mbc={}] exit Start 0.000044 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470458031s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298309326s@ mbc={}] enter Started/Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 17.529772 23 0.000117
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active 17.535221 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary 18.547574 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] exit Started 18.547606 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470116615s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 active pruub 117.298522949s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470049858s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298522949s@ mbc={}] exit Reset 0.000100 1 0.000173
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470049858s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298522949s@ mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470049858s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298522949s@ mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470049858s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298522949s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470049858s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298522949s@ mbc={}] exit Start 0.000045 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.470049858s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298522949s@ mbc={}] enter Started/Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 17.530503 23 0.000141
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active 17.535483 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary 18.547358 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] exit Started 18.547395 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=39'483 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.469445229s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 active pruub 117.298500061s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.469401360s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298500061s@ mbc={}] exit Reset 0.000085 1 0.000149
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.469401360s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298500061s@ mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.469401360s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298500061s@ mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.469401360s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298500061s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.469401360s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298500061s@ mbc={}] exit Start 0.000015 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 66 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66 pruub=14.469401360s) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY pruub 117.298500061s@ mbc={}] enter Started/Stray
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 66 handle_osd_map epochs [62,66], i have 66, src has [1,66]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:44.686247+0000)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 27 sent 25 num 2 unsent 2 sending 2
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:14.614340+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.17 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:14.624861+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.17 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:28 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 69648384 unmapped: 516096 heap: 70164480 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 66 handle_osd_map epochs [67,67], i have 66, src has [1,67]
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.010267 3 0.000152
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.010371 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.011058 3 0.000056
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.011104 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=66) [2] r=-1 lpr=66 pi=[56,66)/1 crt=39'483 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.009762 3 0.000063
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.009822 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Reset 0.000066 1 0.000095
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Reset 0.000077 1 0.000106
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000042 1 0.000047
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000035 1 0.000047
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.011145 3 0.000154
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.011232 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=66) [2] r=-1 lpr=66 pi=[57,66)/1 crt=39'483 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 27)
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:14.614340+0000 osd.0 (osd.0) 26 : cluster [DBG] 4.17 scrub starts
Jan 20 19:27:28 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:14.624861+0000 osd.0 (osd.0) 27 : cluster [DBG] 4.17 scrub ok
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Reset 0.001193 1 0.001229
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.001140 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:28 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:28 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000085
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000027 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000005 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Reset 0.001263 1 0.001276
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Start 0.000012 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.001471 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000158 1 0.000106
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000046 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000008 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 67 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: handle_auth_request added challenge on 0x561429daec00
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: handle_auth_request added challenge on 0x561429daf000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 67 heartbeat osd_stat(store_statfs(0x4fe0eb000/0x0/0x4ffc00000, data 0x5b07b/0xdd000, compress 0x0/0x0/0x0, omap 0xc270, meta 0x1a23d90), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:45.686521+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 70631424 unmapped: 581632 heap: 71213056 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 67 handle_osd_map epochs [67,68], i have 67, src has [1,68]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 67 handle_osd_map epochs [68,68], i have 68, src has [1,68]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.997131 4 0.001195
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.998368 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996967 4 0.000147
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.997177 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996777 4 0.001595
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.998404 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.996754 4 0.000130
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 0.997565 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=57/58 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/Activating 0.006577 5 0.000322
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/Activating 0.006684 5 0.000260
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.006608 5 0.000212
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000173 1 0.000040
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000296 1 0.000028
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=8}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/Activating 0.006945 5 0.000753
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 39'483 (0'0,39'483] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: handle_auth_request added challenge on 0x561429daf400
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 68'484 (0'0,68'484] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 lcod 39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.058819 2 0.000026
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.f( v 68'484 (0'0,68'484] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 lcod 39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.059361 1 0.000012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000692 1 0.000126
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 68'484 (0'0,68'484] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 lcod 39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.030944 1 0.000093
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.17( v 68'484 (0'0,68'484] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=39'483 lcod 39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.091160 1 0.000046
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000969 1 0.000077
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=5}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 68 ms_handle_reset con 0x561429daf000 session 0x56142b324540
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 68 ms_handle_reset con 0x561429daf400 session 0x56142b3cb880
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 68 ms_handle_reset con 0x561429daec00 session 0x561429e0e540
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.039932 2 0.000087
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'484 lcod 68'484 mlcod 39'49 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.131304 1 0.000047
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'484 lcod 68'484 mlcod 39'49 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'484 lcod 68'484 mlcod 39'49 active+recovery_wait+remapped mbc={255={(0+1)=7}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000810 1 0.000091
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'484 lcod 68'484 mlcod 39'49 active+recovery_wait+remapped mbc={255={(0+1)=7}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.052749 2 0.000152
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 68 pg[9.7( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.19 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 4.19 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 68 heartbeat osd_stat(store_statfs(0x4fe0e7000/0x0/0x4ffc00000, data 0x5e6a9/0xe3000, compress 0x0/0x0/0x0, omap 0xc78c, meta 0x1a23874), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:46.686634+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 29 sent 27 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:16.614155+0000 osd.0 (osd.0) 28 : cluster [DBG] 4.19 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:16.624737+0000 osd.0 (osd.0) 29 : cluster [DBG] 4.19 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 70885376 unmapped: 1376256 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 685677 data_alloc: 218103808 data_used: 4647
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: handle_auth_request added challenge on 0x561429227000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: handle_auth_request added challenge on 0x56142b97a000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.587293625s of 10.324477196s, submitted: 79
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: handle_auth_request added challenge on 0x56142b97a400
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:214: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:253: int rados::cls::fifo::{anonymous}::create_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*): FIFO already exists, reading from disk and comparing.
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 68 ms_handle_reset con 0x56142b97a000 session 0x56142908ddc0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 68 ms_handle_reset con 0x56142b97a400 session 0x56142908ce00
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 68 ms_handle_reset con 0x561429227000 session 0x56142908c540
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 29)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:16.614155+0000 osd.0 (osd.0) 28 : cluster [DBG] 4.19 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:16.624737+0000 osd.0 (osd.0) 29 : cluster [DBG] 4.19 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 68 handle_osd_map epochs [69,69], i have 68, src has [1,69]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.951771 1 0.000110
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active 1.049640 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary 2.048064 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started 2.048093 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[56,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69 pruub=14.956954956s) [2] async=[2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 active pruub 120.844100952s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69 pruub=14.956759453s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844100952s@ mbc={}] exit Reset 0.000253 1 0.000313
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69 pruub=14.956759453s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844100952s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69 pruub=14.956759453s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844100952s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69 pruub=14.956759453s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844100952s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69 pruub=14.956759453s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844100952s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69 pruub=14.956759453s) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844100952s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.984041 1 0.000097
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active 1.050225 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'486 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.857455 1 0.000118
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'486 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active 1.049584 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary 2.048615 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'486 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary 2.047180 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'486 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started 2.047221 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'486 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started 2.048672 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=68'484 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.957309723s) [2] async=[2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 active pruub 120.844993591s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.957247734s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 120.844993591s@ mbc={}] exit Reset 0.000102 1 0.000166
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.957247734s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 120.844993591s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.957247734s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 120.844993591s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956269264s) [2] async=[2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 active pruub 120.844009399s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.957247734s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 120.844993591s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.957247734s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 120.844993591s@ mbc={}] exit Start 0.000012 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.957247734s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 120.844993591s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956115723s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844009399s@ mbc={}] exit Reset 0.000238 1 0.000362
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956115723s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844009399s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956115723s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844009399s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956115723s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844009399s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956115723s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844009399s@ mbc={}] exit Start 0.000011 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.911577 1 0.000119
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active 1.050589 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary 2.047801 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started 2.047830 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956115723s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 120.844009399s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=67) [2]/[0] async=[2] r=0 lpr=67 pi=[57,67)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.956018448s) [2] async=[2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 active pruub 120.844070435s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.955755234s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY pruub 120.844070435s@ mbc={}] exit Reset 0.000315 1 0.000388
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.955755234s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY pruub 120.844070435s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.955755234s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY pruub 120.844070435s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.955755234s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY pruub 120.844070435s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.955755234s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY pruub 120.844070435s@ mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 69 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69 pruub=14.955755234s) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY pruub 120.844070435s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 69 handle_osd_map epochs [69,69], i have 69, src has [1,69]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:47.686888+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 1007616 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 69 handle_osd_map epochs [70,70], i have 69, src has [1,70]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.148513 6 0.000106
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] exit Started/Stray 1.148963 6 0.000175
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] exit Started/Stray 1.149615 6 0.000125
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started/Stray 1.149564 6 0.000225
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001057 1 0.000074
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001225 2 0.000091
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001220 2 0.000069
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.001010 2 0.000032
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 DELETING pi=[57,69)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.071024 3 0.000273
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete 0.072172 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.1f( v 39'483 (0'0,39'483] lb MIN local-lis/les=67/68 n=6 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.220748 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:48.687165+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 31 sent 29 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:18.571516+0000 osd.0 (osd.0) 30 : cluster [DBG] 5.14 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:18.581798+0000 osd.0 (osd.0) 31 : cluster [DBG] 5.14 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] lb MIN local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 DELETING pi=[57,69)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.129874 2 0.000428
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] lb MIN local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete 0.131171 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.f( v 68'485 (0'0,68'485] lb MIN local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started 1.280183 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] lb MIN local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=-1 lpr=69 DELETING pi=[56,69)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.159582 2 0.000128
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] lb MIN local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete 0.160886 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.17( v 68'485 (0'0,68'485] lb MIN local-lis/les=67/68 n=6 ec=49/33 lis/c=67/56 les/c/f=68/57/0 sis=69) [2] r=-1 lpr=69 pi=[56,69)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started 1.310569 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] lb MIN local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 DELETING pi=[57,69)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.211691 2 0.000405
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] lb MIN local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete 0.212774 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 70 pg[9.7( v 68'487 (0'0,68'487] lb MIN local-lis/les=67/68 n=7 ec=49/33 lis/c=67/57 les/c/f=68/58/0 sis=69) [2] r=-1 lpr=69 pi=[57,69)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started 1.362389 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 31)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:18.571516+0000 osd.0 (osd.0) 30 : cluster [DBG] 5.14 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:18.581798+0000 osd.0 (osd.0) 31 : cluster [DBG] 5.14 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:49.687448+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 974848 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:50.687684+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 70 heartbeat osd_stat(store_statfs(0x4fe0e5000/0x0/0x4ffc00000, data 0x6171c/0xe1000, compress 0x0/0x0/0x0, omap 0xcc52, meta 0x1a233ae), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1089536 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:51.687847+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 33 sent 31 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:21.538616+0000 osd.0 (osd.0) 32 : cluster [DBG] 2.16 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:21.549155+0000 osd.0 (osd.0) 33 : cluster [DBG] 2.16 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1245184 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 652108 data_alloc: 218103808 data_used: 3483
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 33)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:21.538616+0000 osd.0 (osd.0) 32 : cluster [DBG] 2.16 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:21.549155+0000 osd.0 (osd.0) 33 : cluster [DBG] 2.16 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 70 handle_osd_map epochs [70,71], i have 70, src has [1,71]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 40.098239 73 0.001095
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active 40.105199 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary 41.114393 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started 41.114443 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=45) [0] r=0 lpr=45 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71 pruub=15.901213646s) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 active pruub 126.894775391s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71 pruub=15.901094437s) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.894775391s@ mbc={}] exit Reset 0.000218 1 0.000958
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71 pruub=15.901094437s) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.894775391s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71 pruub=15.901094437s) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.894775391s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71 pruub=15.901094437s) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.894775391s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71 pruub=15.901094437s) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.894775391s@ mbc={}] exit Start 0.000055 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 71 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71 pruub=15.901094437s) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY pruub 126.894775391s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 71 handle_osd_map epochs [71,71], i have 71, src has [1,71]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:52.688035+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 1245184 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 71 handle_osd_map epochs [72,72], i have 71, src has [1,72]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 71 handle_osd_map epochs [71,72], i have 72, src has [1,72]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 0.979538 7 0.000428
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000073 1 0.000051
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=-1 lpr=71 DELETING pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.001957 1 0.000040
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.002066 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 72 pg[6.8( v 39'39 (0'0,39'39] lb MIN local-lis/les=45/47 n=1 ec=45/22 lis/c=45/45 les/c/f=47/47/0 sis=71) [2] r=-1 lpr=71 pi=[45,71)/1 crt=39'39 lcod 0'0 unknown NOTIFY mbc={}] exit Started 0.981739 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 72 heartbeat osd_stat(store_statfs(0x4fe0e1000/0x0/0x4ffc00000, data 0x64f09/0xe7000, compress 0x0/0x0/0x0, omap 0xd18c, meta 0x1a22e74), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:53.688191+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71024640 unmapped: 1236992 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 72 handle_osd_map epochs [73,73], i have 72, src has [1,73]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9(unlocked)] enter Initial
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=0 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000109 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=0 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000083
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000133 1 0.000057
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001411 2 0.000093
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000019 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 73 pg[6.9( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:54.688357+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 35 sent 33 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:24.553284+0000 osd.0 (osd.0) 34 : cluster [DBG] 2.8 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:24.563832+0000 osd.0 (osd.0) 35 : cluster [DBG] 2.8 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71041024 unmapped: 1220608 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 73 handle_osd_map epochs [73,74], i have 73, src has [1,74]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 73 handle_osd_map epochs [73,74], i have 74, src has [1,74]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 74 pg[6.9( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.999228 2 0.000126
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 74 pg[6.9( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000869 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 74 pg[6.9( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=53/54 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=73/74 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 74 handle_osd_map epochs [74,74], i have 74, src has [1,74]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=73/74 n=1 ec=45/22 lis/c=53/53 les/c/f=54/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=73/74 n=1 ec=45/22 lis/c=73/53 les/c/f=74/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003217 4 0.000169
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=73/74 n=1 ec=45/22 lis/c=73/53 les/c/f=74/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=73/74 n=1 ec=45/22 lis/c=73/53 les/c/f=74/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000014 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 74 pg[6.9( v 39'39 (0'0,39'39] local-lis/les=73/74 n=1 ec=45/22 lis/c=73/53 les/c/f=74/54/0 sis=73) [0] r=0 lpr=73 pi=[53,73)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 35)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:24.553284+0000 osd.0 (osd.0) 34 : cluster [DBG] 2.8 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:24.563832+0000 osd.0 (osd.0) 35 : cluster [DBG] 2.8 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:55.688766+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 37 sent 35 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:25.559343+0000 osd.0 (osd.0) 36 : cluster [DBG] 5.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:25.569876+0000 osd.0 (osd.0) 37 : cluster [DBG] 5.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71106560 unmapped: 1155072 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 37)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:25.559343+0000 osd.0 (osd.0) 36 : cluster [DBG] 5.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:25.569876+0000 osd.0 (osd.0) 37 : cluster [DBG] 5.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 74 handle_osd_map epochs [75,75], i have 74, src has [1,75]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:56.688957+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71114752 unmapped: 1146880 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 671974 data_alloc: 218103808 data_used: 3483
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 75 heartbeat osd_stat(store_statfs(0x4fe0dd000/0x0/0x4ffc00000, data 0x686fe/0xed000, compress 0x0/0x0/0x0, omap 0xd6d6, meta 0x1a2292a), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:57.689122+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71122944 unmapped: 1138688 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.265222549s of 11.401788712s, submitted: 70
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:58.689283+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 39 sent 37 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:28.475100+0000 osd.0 (osd.0) 38 : cluster [DBG] 10.1e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:28.485410+0000 osd.0 (osd.0) 39 : cluster [DBG] 10.1e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 75 heartbeat osd_stat(store_statfs(0x4fe0d8000/0x0/0x4ffc00000, data 0x6a14d/0xf0000, compress 0x0/0x0/0x0, omap 0xd916, meta 0x1a226ea), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 1122304 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 75 heartbeat osd_stat(store_statfs(0x4fe0d8000/0x0/0x4ffc00000, data 0x6a14d/0xf0000, compress 0x0/0x0/0x0, omap 0xd916, meta 0x1a226ea), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 39)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:28.475100+0000 osd.0 (osd.0) 38 : cluster [DBG] 10.1e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:28.485410+0000 osd.0 (osd.0) 39 : cluster [DBG] 10.1e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:04:59.689508+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 41 sent 39 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:29.480883+0000 osd.0 (osd.0) 40 : cluster [DBG] 5.15 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:29.491402+0000 osd.0 (osd.0) 41 : cluster [DBG] 5.15 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71139328 unmapped: 1122304 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.2 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.2 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 41)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:29.480883+0000 osd.0 (osd.0) 40 : cluster [DBG] 5.15 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:29.491402+0000 osd.0 (osd.0) 41 : cluster [DBG] 5.15 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:00.689780+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 43 sent 41 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:30.491514+0000 osd.0 (osd.0) 42 : cluster [DBG] 5.2 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:30.502030+0000 osd.0 (osd.0) 43 : cluster [DBG] 5.2 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 1114112 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 43)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:30.491514+0000 osd.0 (osd.0) 42 : cluster [DBG] 5.2 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:30.502030+0000 osd.0 (osd.0) 43 : cluster [DBG] 5.2 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:01.689995+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71147520 unmapped: 1114112 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 678285 data_alloc: 218103808 data_used: 4037
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 75 handle_osd_map epochs [76,76], i have 75, src has [1,76]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a(unlocked)] enter Initial
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=0 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000211 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=0 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000011 1 0.000033
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000169 1 0.000064
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( empty local-lis/les=0/0 n=0 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001943 2 0.000069
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 76 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 76 handle_osd_map epochs [76,77], i have 76, src has [1,77]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 76 handle_osd_map epochs [77,77], i have 77, src has [1,77]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 77 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.215555 2 0.000065
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 77 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.217730 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 77 pg[6.a( v 39'39 lc 0'0 (0'0,39'39] local-lis/les=55/56 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=76/77 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=76/77 n=1 ec=45/22 lis/c=55/55 les/c/f=56/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=76/77 n=1 ec=45/22 lis/c=76/55 les/c/f=77/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.001502 4 0.000097
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=76/77 n=1 ec=45/22 lis/c=76/55 les/c/f=77/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=76/77 n=1 ec=45/22 lis/c=76/55 les/c/f=77/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000015 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 77 pg[6.a( v 39'39 (0'0,39'39] local-lis/les=76/77 n=1 ec=45/22 lis/c=76/55 les/c/f=77/56/0 sis=76) [0] r=0 lpr=76 pi=[55,76)/1 crt=39'39 lcod 0'0 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:02.690112+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 45 sent 43 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:32.550140+0000 osd.0 (osd.0) 44 : cluster [DBG] 10.17 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:32.560766+0000 osd.0 (osd.0) 45 : cluster [DBG] 10.17 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 77 heartbeat osd_stat(store_statfs(0x4fe0d7000/0x0/0x4ffc00000, data 0x6be87/0xf3000, compress 0x0/0x0/0x0, omap 0xdbc4, meta 0x1a2243c), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 1097728 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 77 handle_osd_map epochs [78,78], i have 77, src has [1,78]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 77 handle_osd_map epochs [77,78], i have 78, src has [1,78]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=59) [0] r=0 lpr=59 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 29.878050 53 0.000666
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=59) [0] r=0 lpr=59 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 30.078891 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=59) [0] r=0 lpr=59 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 30.668790 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=59) [0] r=0 lpr=59 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 30.668833 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=59) [0] r=0 lpr=59 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.928627014s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 active pruub 132.051498413s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.928561211s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 unknown NOTIFY pruub 132.051498413s@ mbc={}] exit Reset 0.000113 1 0.000182
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.928561211s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 unknown NOTIFY pruub 132.051498413s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.928561211s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 unknown NOTIFY pruub 132.051498413s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.928561211s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 unknown NOTIFY pruub 132.051498413s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.928561211s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 unknown NOTIFY pruub 132.051498413s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 78 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78 pruub=9.928561211s) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 unknown NOTIFY pruub 132.051498413s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 45)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:32.550140+0000 osd.0 (osd.0) 44 : cluster [DBG] 10.17 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:32.560766+0000 osd.0 (osd.0) 45 : cluster [DBG] 10.17 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 78 handle_osd_map epochs [78,78], i have 78, src has [1,78]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:03.690300+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1089536 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 78 handle_osd_map epochs [79,79], i have 78, src has [1,79]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 1.020908 6 0.000085
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 pi=[59,78)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:04.690422+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 pi=[59,78)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.007698 3 0.000157
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 pi=[59,78)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.007755 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 pi=[59,78)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 pi=[59,78)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 pi=[59,78)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000083 1 0.000094
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 pi=[59,78)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 DELETING pi=[59,78)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.010100 2 0.000175
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 pi=[59,78)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.010234 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 79 pg[6.b( v 39'39 (0'0,39'39] lb MIN local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=78) [1] r=-1 lpr=78 pi=[59,78)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 1.039028 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71262208 unmapped: 999424 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:05.690564+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 79 heartbeat osd_stat(store_statfs(0x4fe0ce000/0x0/0x4ffc00000, data 0x7105f/0xfc000, compress 0x0/0x0/0x0, omap 0xe3a9, meta 0x1a21c57), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:06.690707+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71270400 unmapped: 991232 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 693262 data_alloc: 218103808 data_used: 4037
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:07.690822+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 79 heartbeat osd_stat(store_statfs(0x4fe0ce000/0x0/0x4ffc00000, data 0x7105f/0xfc000, compress 0x0/0x0/0x0, omap 0xe3a9, meta 0x1a21c57), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.017313957s of 10.072425842s, submitted: 23
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:08.690982+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 47 sent 45 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:38.547569+0000 osd.0 (osd.0) 46 : cluster [DBG] 2.2 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:38.558104+0000 osd.0 (osd.0) 47 : cluster [DBG] 2.2 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 47)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:38.547569+0000 osd.0 (osd.0) 46 : cluster [DBG] 2.2 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:38.558104+0000 osd.0 (osd.0) 47 : cluster [DBG] 2.2 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71278592 unmapped: 983040 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:09.691170+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 79 handle_osd_map epochs [80,80], i have 79, src has [1,80]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 974848 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 80 heartbeat osd_stat(store_statfs(0x4fe0d0000/0x0/0x4ffc00000, data 0x7105f/0xfc000, compress 0x0/0x0/0x0, omap 0xe3a9, meta 0x1a21c57), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:10.691293+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 80 handle_osd_map epochs [80,81], i have 80, src has [1,81]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71303168 unmapped: 958464 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 81 heartbeat osd_stat(store_statfs(0x4fe0cb000/0x0/0x4ffc00000, data 0x72d99/0xff000, compress 0x0/0x0/0x0, omap 0xe662, meta 0x1a2199e), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:11.691460+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 49 sent 47 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:41.569844+0000 osd.0 (osd.0) 48 : cluster [DBG] 5.5 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:41.580452+0000 osd.0 (osd.0) 49 : cluster [DBG] 5.5 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 81 handle_osd_map epochs [81,82], i have 81, src has [1,82]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 49)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:41.569844+0000 osd.0 (osd.0) 48 : cluster [DBG] 5.5 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:41.580452+0000 osd.0 (osd.0) 49 : cluster [DBG] 5.5 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71311360 unmapped: 950272 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 706864 data_alloc: 218103808 data_used: 4037
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:12.691691+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 82 handle_osd_map epochs [82,83], i have 82, src has [1,83]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71319552 unmapped: 942080 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:13.691939+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 51 sent 49 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:43.493757+0000 osd.0 (osd.0) 50 : cluster [DBG] 2.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:43.504294+0000 osd.0 (osd.0) 51 : cluster [DBG] 2.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 51)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:43.493757+0000 osd.0 (osd.0) 50 : cluster [DBG] 2.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:43.504294+0000 osd.0 (osd.0) 51 : cluster [DBG] 2.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 83 handle_osd_map epochs [84,84], i have 83, src has [1,84]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 83 handle_osd_map epochs [83,84], i have 84, src has [1,84]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 1097728 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:14.692240+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fcf1f000/0x0/0x4ffc00000, data 0x79795/0x10b000, compress 0x0/0x0/0x0, omap 0xf082, meta 0x2bc0f7e), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71163904 unmapped: 1097728 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:15.692415+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71172096 unmapped: 1089536 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 84 heartbeat osd_stat(store_statfs(0x4fcf1f000/0x0/0x4ffc00000, data 0x79795/0x10b000, compress 0x0/0x0/0x0, omap 0xf082, meta 0x2bc0f7e), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 84 handle_osd_map epochs [85,85], i have 84, src has [1,85]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=62) [0] r=0 lpr=62 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 35.191227 66 0.000300
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=62) [0] r=0 lpr=62 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 35.386341 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=62) [0] r=0 lpr=62 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 36.407528 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=62) [0] r=0 lpr=62 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 36.407582 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=62) [0] r=0 lpr=62 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=12.617008209s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 active pruub 147.753036499s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=12.616838455s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 unknown NOTIFY pruub 147.753036499s@ mbc={}] exit Reset 0.000223 1 0.000347
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=12.616838455s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 unknown NOTIFY pruub 147.753036499s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=12.616838455s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 unknown NOTIFY pruub 147.753036499s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=12.616838455s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 unknown NOTIFY pruub 147.753036499s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=12.616838455s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 unknown NOTIFY pruub 147.753036499s@ mbc={}] exit Start 0.000055 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 85 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85 pruub=12.616838455s) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 unknown NOTIFY pruub 147.753036499s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 85 handle_osd_map epochs [85,85], i have 85, src has [1,85]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:16.692573+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 85 handle_osd_map epochs [85,86], i have 85, src has [1,86]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 0.173250 7 0.000250
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 pi=[62,85)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 86 handle_osd_map epochs [86,86], i have 86, src has [1,86]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 pi=[62,85)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.072513 2 0.000067
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 pi=[62,85)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.072564 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 pi=[62,85)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 pi=[62,85)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 pi=[62,85)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000131 1 0.000126
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 pi=[62,85)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 DELETING pi=[62,85)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.017638 2 0.000245
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 pi=[62,85)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.017877 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 86 pg[6.d( v 39'39 (0'0,39'39] lb MIN local-lis/les=62/63 n=1 ec=45/22 lis/c=62/62 les/c/f=63/63/0 sis=85) [1] r=-1 lpr=85 pi=[62,85)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 0.263831 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71057408 unmapped: 1204224 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 716199 data_alloc: 218103808 data_used: 4885
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:17.692734+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71065600 unmapped: 1196032 heap: 72261632 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 86 handle_osd_map epochs [87,87], i have 86, src has [1,87]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.035809517s of 10.118613243s, submitted: 20
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:18.692893+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71073792 unmapped: 2236416 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:19.693058+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71131136 unmapped: 2179072 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 87 handle_osd_map epochs [88,88], i have 87, src has [1,88]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=59) [0] r=0 lpr=59 crt=39'39 mlcod 39'39 active+clean] exit Started/Primary/Active/Clean 46.464594 85 0.000437
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=59) [0] r=0 lpr=59 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary/Active 46.885859 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=59) [0] r=0 lpr=59 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started/Primary 47.479156 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=59) [0] r=0 lpr=59 crt=39'39 mlcod 39'39 active mbc={255={}}] exit Started 47.479535 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=59) [0] r=0 lpr=59 crt=39'39 mlcod 39'39 active mbc={255={}}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=9.120479584s) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 active pruub 148.052017212s@ mbc={255={}}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=9.120371819s) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 unknown NOTIFY pruub 148.052017212s@ mbc={}] exit Reset 0.000184 1 0.000303
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=9.120371819s) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 unknown NOTIFY pruub 148.052017212s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=9.120371819s) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 unknown NOTIFY pruub 148.052017212s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=9.120371819s) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 unknown NOTIFY pruub 148.052017212s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=9.120371819s) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 unknown NOTIFY pruub 148.052017212s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 88 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88 pruub=9.120371819s) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 unknown NOTIFY pruub 148.052017212s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:20.693225+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 88 handle_osd_map epochs [88,89], i have 88, src has [1,89]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 unknown NOTIFY mbc={}] exit Started/Stray 0.431759 7 0.000105
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 pi=[59,88)/1 crt=39'39 unknown NOTIFY mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 89 handle_osd_map epochs [89,89], i have 89, src has [1,89]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 pi=[59,88)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.130974 2 0.000074
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 pi=[59,88)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ReplicaActive 0.131010 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 pi=[59,88)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 pi=[59,88)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 pi=[59,88)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000110 1 0.000070
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 pi=[59,88)/1 pct=0'0 crt=39'39 active mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 DELETING pi=[59,88)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete/Deleting 0.024315 2 0.000175
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 pi=[59,88)/1 pct=0'0 crt=39'39 active mbc={}] exit Started/ToDelete 0.024480 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 89 pg[6.f( v 39'39 (0'0,39'39] lb MIN local-lis/les=59/60 n=1 ec=45/22 lis/c=59/59 les/c/f=60/60/0 sis=88) [2] r=-1 lpr=88 pi=[59,88)/1 pct=0'0 crt=39'39 active mbc={}] exit Started 0.587324 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 2367488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:21.693422+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 70942720 unmapped: 2367488 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 718461 data_alloc: 218103808 data_used: 4885
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 89 heartbeat osd_stat(store_statfs(0x4fcf11000/0x0/0x4ffc00000, data 0x82135/0x117000, compress 0x0/0x0/0x0, omap 0xff46, meta 0x2bc00ba), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:22.693579+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 2318336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 89 handle_osd_map epochs [90,90], i have 89, src has [1,90]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:23.693722+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 70991872 unmapped: 2318336 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:24.693851+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 90 handle_osd_map epochs [91,91], i have 90, src has [1,91]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71016448 unmapped: 2293760 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:25.694038+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 91 handle_osd_map epochs [91,92], i have 91, src has [1,92]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 2064384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:26.694230+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71245824 unmapped: 2064384 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 727241 data_alloc: 218103808 data_used: 4885
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:27.694422+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 53 sent 51 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:57.478956+0000 osd.0 (osd.0) 52 : cluster [DBG] 10.7 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:57.489504+0000 osd.0 (osd.0) 53 : cluster [DBG] 10.7 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 92 handle_osd_map epochs [92,93], i have 92, src has [1,93]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=68'484 lcod 68'484 mlcod 68'484 active+clean] exit Started/Primary/Active/Clean 61.892253 112 0.000784
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] exit Started/Primary/Active 61.894864 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] exit Started/Primary 62.911777 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] exit Started 62.912181 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=68'484 lcod 68'484 mlcod 68'484 active mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=10.109041214s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 active pruub 156.283859253s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=10.108978271s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 156.283859253s@ mbc={}] exit Reset 0.000115 1 0.000196
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=10.108978271s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 156.283859253s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=10.108978271s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 156.283859253s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=10.108978271s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 156.283859253s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=10.108978271s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 156.283859253s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 93 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93 pruub=10.108978271s) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 unknown NOTIFY pruub 156.283859253s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 53)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:57.478956+0000 osd.0 (osd.0) 52 : cluster [DBG] 10.7 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:57.489504+0000 osd.0 (osd.0) 53 : cluster [DBG] 10.7 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 93 handle_osd_map epochs [92,93], i have 93, src has [1,93]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 93 heartbeat osd_stat(store_statfs(0x4fcf06000/0x0/0x4ffc00000, data 0x87409/0x120000, compress 0x0/0x0/0x0, omap 0x106ea, meta 0x2bbf916), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71254016 unmapped: 2056192 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 93 handle_osd_map epochs [94,94], i have 93, src has [1,94]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] exit Started/Stray 0.795096 3 0.000046
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] exit Started 0.795135 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=93) [2] r=-1 lpr=93 pi=[56,93)/1 crt=68'484 lcod 68'484 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] exit Reset 0.000059 1 0.000088
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] exit Start 0.000006 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000030 1 0.000038
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000028 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 94 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:28.694646+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71286784 unmapped: 2023424 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.659379959s of 10.818087578s, submitted: 21
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 94 handle_osd_map epochs [94,95], i have 94, src has [1,95]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 94 handle_osd_map epochs [95,95], i have 95, src has [1,95]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.004216 4 0.000063
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.004368 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'484 lcod 68'484 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 0'0 activating+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/Activating 0.005872 5 0.000559
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000224 1 0.000099
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000596 1 0.000043
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=6}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.056865 2 0.000066
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 95 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 sudo[247402]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:29.694840+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 55 sent 53 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:59.484431+0000 osd.0 (osd.0) 54 : cluster [DBG] 2.1c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:05:59.494879+0000 osd.0 (osd.0) 55 : cluster [DBG] 2.1c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 55)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:59.484431+0000 osd.0 (osd.0) 54 : cluster [DBG] 2.1c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:05:59.494879+0000 osd.0 (osd.0) 55 : cluster [DBG] 2.1c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71417856 unmapped: 1892352 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 95 handle_osd_map epochs [95,96], i have 95, src has [1,96]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 95 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.938333 1 0.000191
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary/Active 1.002304 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started/Primary 2.006732 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] exit Started 2.006757 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=94) [2]/[0] async=[2] r=0 lpr=94 pi=[56,94)/1 crt=68'485 lcod 68'484 mlcod 68'484 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.003457069s) [2] async=[2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 active pruub 163.980422974s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.003147125s) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 163.980422974s@ mbc={}] exit Reset 0.000407 1 0.000471
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.003147125s) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 163.980422974s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.003147125s) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 163.980422974s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.003147125s) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 163.980422974s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.003147125s) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 163.980422974s@ mbc={}] exit Start 0.000394 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 96 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96 pruub=15.003147125s) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY pruub 163.980422974s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 96 handle_osd_map epochs [96,96], i have 96, src has [1,96]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:30.695078+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71426048 unmapped: 1884160 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:31.695270+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71499776 unmapped: 1810432 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 743283 data_alloc: 218103808 data_used: 4995
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 96 handle_osd_map epochs [96,97], i have 96, src has [1,97]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/Stray 1.807722 7 0.000812
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000125 1 0.000052
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] lb MIN local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=-1 lpr=96 DELETING pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.046960 2 0.000241
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] lb MIN local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started/ToDelete 0.047173 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 97 pg[9.13( v 68'485 (0'0,68'485] lb MIN local-lis/les=94/95 n=6 ec=49/33 lis/c=94/56 les/c/f=95/57/0 sis=96) [2] r=-1 lpr=96 pi=[56,96)/1 crt=68'485 lcod 68'484 unknown NOTIFY mbc={}] exit Started 1.855446 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:32.695412+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 57 sent 55 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:02.444420+0000 osd.0 (osd.0) 56 : cluster [DBG] 5.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:02.454915+0000 osd.0 (osd.0) 57 : cluster [DBG] 5.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71516160 unmapped: 1794048 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 97 heartbeat osd_stat(store_statfs(0x4fcefd000/0x0/0x4ffc00000, data 0x8f990/0x12d000, compress 0x0/0x0/0x0, omap 0x11384, meta 0x2bbec7c), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 57)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:02.444420+0000 osd.0 (osd.0) 56 : cluster [DBG] 5.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:02.454915+0000 osd.0 (osd.0) 57 : cluster [DBG] 5.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 97 handle_osd_map epochs [98,98], i have 97, src has [1,98]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=39'483 mlcod 0'0 active+clean] exit Started/Primary/Active/Clean 67.561656 127 0.000406
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active 67.564779 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary 68.584976 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=39'483 mlcod 0'0 active mbc={}] exit Started 68.585697 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=56) [0] r=0 lpr=56 crt=39'483 mlcod 0'0 active mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98 pruub=12.439196587s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 active pruub 164.286026001s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98 pruub=12.438729286s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY pruub 164.286026001s@ mbc={}] exit Reset 0.000561 1 0.000836
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98 pruub=12.438729286s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY pruub 164.286026001s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98 pruub=12.438729286s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY pruub 164.286026001s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98 pruub=12.438729286s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY pruub 164.286026001s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98 pruub=12.438729286s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY pruub 164.286026001s@ mbc={}] exit Start 0.000205 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 98 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98 pruub=12.438729286s) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY pruub 164.286026001s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 98 handle_osd_map epochs [98,98], i have 98, src has [1,98]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 98 handle_osd_map epochs [99,99], i have 98, src has [1,99]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 0.127980 3 0.000383
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 0.128286 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=98) [1] r=-1 lpr=98 pi=[56,98)/1 crt=39'483 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Reset 0.000062 1 0.000091
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000032 1 0.000049
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000050 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 99 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:33.695581+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 59 sent 57 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:03.431972+0000 osd.0 (osd.0) 58 : cluster [DBG] 10.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:03.442541+0000 osd.0 (osd.0) 59 : cluster [DBG] 10.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 71573504 unmapped: 1736704 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 99 heartbeat osd_stat(store_statfs(0x4fcef8000/0x0/0x4ffc00000, data 0x9152c/0x130000, compress 0x0/0x0/0x0, omap 0x115f6, meta 0x2bbea0a), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 59)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:03.431972+0000 osd.0 (osd.0) 58 : cluster [DBG] 10.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:03.442541+0000 osd.0 (osd.0) 59 : cluster [DBG] 10.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 99 handle_osd_map epochs [99,100], i have 99, src has [1,100]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 99 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16(unlocked)] enter Initial
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=0 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000080 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=0 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000022
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000208 1 0.000045
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000029 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000261 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.005136 4 0.000111
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.005326 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=56/57 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 activating+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=56/56 les/c/f=57/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/Activating 0.003412 5 0.000549
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000126 1 0.000102
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000461 1 0.000049
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=4}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.028430 2 0.000077
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 100 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:34.695749+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 61 sent 59 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:04.465888+0000 osd.0 (osd.0) 60 : cluster [DBG] 10.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:04.476398+0000 osd.0 (osd.0) 61 : cluster [DBG] 10.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 100 heartbeat osd_stat(store_statfs(0x4fcef3000/0x0/0x4ffc00000, data 0x92fad/0x133000, compress 0x0/0x0/0x0, omap 0x1186a, meta 0x2bbe796), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 61)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:04.465888+0000 osd.0 (osd.0) 60 : cluster [DBG] 10.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:04.476398+0000 osd.0 (osd.0) 61 : cluster [DBG] 10.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 100 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.010716 2 0.000062
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.011010 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.011035 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=100) [0] r=0 lpr=100 pi=[67,100)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000099 1 0.000144
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.16( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.978999 1 0.000171
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary/Active 1.011747 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started/Primary 2.017125 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] exit Started 2.017145 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=99) [1]/[0] async=[1] r=0 lpr=99 pi=[56,99)/1 crt=39'483 mlcod 39'483 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101 pruub=14.991423607s) [1] async=[1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 active pruub 168.984329224s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101 pruub=14.991363525s) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY pruub 168.984329224s@ mbc={}] exit Reset 0.000109 1 0.000146
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101 pruub=14.991363525s) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY pruub 168.984329224s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101 pruub=14.991363525s) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY pruub 168.984329224s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101 pruub=14.991363525s) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY pruub 168.984329224s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101 pruub=14.991363525s) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY pruub 168.984329224s@ mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 101 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101 pruub=14.991363525s) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY pruub 168.984329224s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 101 handle_osd_map epochs [101,101], i have 101, src has [1,101]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:35.696004+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 63 sent 61 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:05.506144+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.7 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:05.516713+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.7 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72736768 unmapped: 573440 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 63)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:05.506144+0000 osd.0 (osd.0) 62 : cluster [DBG] 5.7 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:05.516713+0000 osd.0 (osd.0) 63 : cluster [DBG] 5.7 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 101 handle_osd_map epochs [101,102], i have 101, src has [1,102]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.015968 6 0.000050
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.16( v 39'483 lc 0'0 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 crt=39'483 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.004674 3 0.000131
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000308 1 0.000057
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.16( v 39'483 lc 39'182 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 pct=0'0 crt=39'483 lcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/Stray 1.021475 7 0.000117
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000069 1 0.000040
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] lb MIN local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=-1 lpr=101 DELETING pi=[56,101)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.039608 2 0.000144
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] lb MIN local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started/ToDelete 0.039741 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.15( v 39'483 (0'0,39'483] lb MIN local-lis/les=99/100 n=6 ec=49/33 lis/c=99/56 les/c/f=100/57/0 sis=101) [1] r=-1 lpr=101 pi=[56,101)/1 crt=39'483 unknown NOTIFY mbc={}] exit Started 1.061262 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.057415 1 0.000067
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 102 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:36.696178+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 65 sent 63 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:06.458910+0000 osd.0 (osd.0) 64 : cluster [DBG] 10.8 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:06.469565+0000 osd.0 (osd.0) 65 : cluster [DBG] 10.8 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72630272 unmapped: 679936 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 760971 data_alloc: 218103808 data_used: 4733
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 65)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:06.458910+0000 osd.0 (osd.0) 64 : cluster [DBG] 10.8 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:06.469565+0000 osd.0 (osd.0) 65 : cluster [DBG] 10.8 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 102 handle_osd_map epochs [103,103], i have 102, src has [1,103]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.951708 1 0.000043
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started/ReplicaActive 1.014237 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 pct=0'0 crt=39'483 active+remapped mbc={}] exit Started 2.030274 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=101) [0]/[2] r=-1 lpr=101 pi=[67,101)/1 pct=0'0 crt=39'483 active+remapped mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 pct=0'0 crt=39'483 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Reset 0.000388 1 0.000498
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 unknown mbc={}] exit Start 0.000099 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000054 1 0.000214
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=0/0 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0 olog.dups.size()=9
Jan 20 19:27:29 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=9
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001405 3 0.000064
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 103 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:37.696410+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 67 sent 65 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:07.460031+0000 osd.0 (osd.0) 66 : cluster [DBG] 2.18 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:07.470652+0000 osd.0 (osd.0) 67 : cluster [DBG] 2.18 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 67)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:07.460031+0000 osd.0 (osd.0) 66 : cluster [DBG] 2.18 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:07.470652+0000 osd.0 (osd.0) 67 : cluster [DBG] 2.18 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.011744 2 0.000133
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.013317 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=101/102 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=103/104 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 104 handle_osd_map epochs [103,104], i have 104, src has [1,104]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=103/104 n=6 ec=49/33 lis/c=101/67 les/c/f=102/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=103/104 n=6 ec=49/33 lis/c=103/67 les/c/f=104/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003148 3 0.000130
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=103/104 n=6 ec=49/33 lis/c=103/67 les/c/f=104/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=103/104 n=6 ec=49/33 lis/c=103/67 les/c/f=104/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000008 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 104 pg[9.16( v 39'483 (0'0,39'483] local-lis/les=103/104 n=6 ec=49/33 lis/c=103/67 les/c/f=104/68/0 sis=103) [0] r=0 lpr=103 pi=[67,103)/1 crt=39'483 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 104 handle_osd_map epochs [104,104], i have 104, src has [1,104]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:38.696653+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 104 heartbeat osd_stat(store_statfs(0x4fceeb000/0x0/0x4ffc00000, data 0x99ad4/0x13f000, compress 0x0/0x0/0x0, omap 0x122d7, meta 0x2bbdd29), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72654848 unmapped: 655360 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:39.696838+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72663040 unmapped: 647168 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 104 heartbeat osd_stat(store_statfs(0x4fcee6000/0x0/0x4ffc00000, data 0x9b523/0x142000, compress 0x0/0x0/0x0, omap 0x12556, meta 0x2bbdaaa), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:40.696990+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72687616 unmapped: 622592 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.831189156s of 12.028363228s, submitted: 125
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:41.697191+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 69 sent 67 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:11.512778+0000 osd.0 (osd.0) 68 : cluster [DBG] 2.19 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:11.523190+0000 osd.0 (osd.0) 69 : cluster [DBG] 2.19 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 598016 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 769553 data_alloc: 218103808 data_used: 4733
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 69)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:11.512778+0000 osd.0 (osd.0) 68 : cluster [DBG] 2.19 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:11.523190+0000 osd.0 (osd.0) 69 : cluster [DBG] 2.19 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:42.697419+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 104 handle_osd_map epochs [105,105], i have 104, src has [1,105]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 581632 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:43.697576+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 565248 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:44.697757+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72744960 unmapped: 565248 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:45.697957+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 105 handle_osd_map epochs [106,106], i have 105, src has [1,106]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fcee5000/0x0/0x4ffc00000, data 0x9d0bf/0x145000, compress 0x0/0x0/0x0, omap 0x127d7, meta 0x2bbd829), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=68'486 lcod 68'486 mlcod 68'486 active+clean] exit Started/Primary/Active/Clean 79.039672 153 0.000567
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started/Primary/Active 79.044712 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started/Primary 80.056554 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] exit Started 80.056628 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=57) [0] r=0 lpr=57 crt=68'486 lcod 68'486 mlcod 68'486 active mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107 pruub=8.961336136s) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 active pruub 173.299713135s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107 pruub=8.961268425s) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 173.299713135s@ mbc={}] exit Reset 0.000117 1 0.000254
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107 pruub=8.961268425s) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 173.299713135s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107 pruub=8.961268425s) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 173.299713135s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107 pruub=8.961268425s) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 173.299713135s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107 pruub=8.961268425s) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 173.299713135s@ mbc={}] exit Start 0.000013 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 107 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107 pruub=8.961268425s) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 unknown NOTIFY pruub 173.299713135s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 548864 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:46.698117+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 107 handle_osd_map epochs [107,108], i have 107, src has [1,108]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started/Stray 0.923786 3 0.000058
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] exit Started 0.923841 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=107) [2] r=-1 lpr=107 pi=[57,107)/1 crt=68'486 lcod 68'486 unknown NOTIFY mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] exit Reset 0.000101 1 0.000134
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] exit Start 0.000008 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetInfo 0.001281 2 0.000048
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetLog 0.000028 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 108 handle_osd_map epochs [108,108], i have 108, src has [1,108]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000004 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 108 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72769536 unmapped: 540672 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 781369 data_alloc: 218103808 data_used: 4733
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:47.698299+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 108 handle_osd_map epochs [108,109], i have 108, src has [1,109]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 109 handle_osd_map epochs [108,109], i have 109, src has [1,109]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.097431 3 0.000074
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped+peering mbc={}] exit Started/Primary/Peering 1.098923 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=57/58 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'486 lcod 68'486 mlcod 0'0 remapped mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 0'0 activating+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 499712 heap: 73310208 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 109 handle_osd_map epochs [109,109], i have 109, src has [1,109]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=57/57 les/c/f=58/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/Activating 0.764480 5 0.000720
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitLocalRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitLocalRecoveryReserved 0.000421 1 0.000270
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/WaitRemoteRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] exit Started/Primary/Active/WaitRemoteRecoveryReserved 0.000414 1 0.000043
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 0'0 active+recovery_wait+remapped mbc={255={(0+1)=9}}] enter Started/Primary/Active/Recovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:48.698437+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovering 0.074158 2 0.000105
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 109 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 109 heartbeat osd_stat(store_statfs(0x4fced7000/0x0/0x4ffc00000, data 0xa3e35/0x151000, compress 0x0/0x0/0x0, omap 0x131ef, meta 0x2bbce11), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 109 handle_osd_map epochs [110,110], i have 109, src has [1,110]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active/Recovered 0.186616 1 0.000132
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary/Active 1.026681 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started/Primary 2.125675 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] exit Started 2.125706 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=108) [2]/[0] async=[2] r=0 lpr=108 pi=[57,108)/1 crt=68'487 lcod 68'486 mlcod 68'486 active+remapped mbc={255={}}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110 pruub=15.737841606s) [2] async=[2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 active pruub 183.126052856s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110 pruub=15.737683296s) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 183.126052856s@ mbc={}] exit Reset 0.000252 1 0.000317
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110 pruub=15.737683296s) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 183.126052856s@ mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110 pruub=15.737683296s) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 183.126052856s@ mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110 pruub=15.737683296s) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 183.126052856s@ mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110 pruub=15.737683296s) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 183.126052856s@ mbc={}] exit Start 0.000010 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 110 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110 pruub=15.737683296s) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY pruub 183.126052856s@ mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 110 handle_osd_map epochs [110,110], i have 110, src has [1,110]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1531904 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:49.698591+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 71 sent 69 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:19.575055+0000 osd.0 (osd.0) 70 : cluster [DBG] 2.b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:19.585650+0000 osd.0 (osd.0) 71 : cluster [DBG] 2.b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 71)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:19.575055+0000 osd.0 (osd.0) 70 : cluster [DBG] 2.b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:19.585650+0000 osd.0 (osd.0) 71 : cluster [DBG] 2.b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/Stray 1.019779 7 0.000105
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000115 1 0.000085
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] lb MIN local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=-1 lpr=110 DELETING pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.069307 2 0.000231
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] lb MIN local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started/ToDelete 0.069488 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 111 pg[9.19( v 68'487 (0'0,68'487] lb MIN local-lis/les=108/109 n=6 ec=49/33 lis/c=108/57 les/c/f=109/58/0 sis=110) [2] r=-1 lpr=110 pi=[57,110)/1 crt=68'487 lcod 68'486 unknown NOTIFY mbc={}] exit Started 1.089341 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 1662976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:50.698803+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 1662976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:51.698938+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 1654784 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 776331 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 111 heartbeat osd_stat(store_statfs(0x4fced6000/0x0/0x4ffc00000, data 0xa729d/0x154000, compress 0x0/0x0/0x0, omap 0x13707, meta 0x2bbc8f9), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:52.699094+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.205221176s of 11.264312744s, submitted: 34
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c(unlocked)] enter Initial
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=0 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000136 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=0 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000038 1 0.000062
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000014 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000188 1 0.000055
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000031 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000232 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 112 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72712192 unmapped: 1646592 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.688391 2 0.000062
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.688744 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.688930 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=112) [0] r=0 lpr=112 pi=[83,112)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000183 1 0.000477
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000048 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 113 pg[9.1c( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 113 handle_osd_map epochs [113,113], i have 113, src has [1,113]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.1e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 5.1e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:53.699240+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 73 sent 71 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:23.502624+0000 osd.0 (osd.0) 72 : cluster [DBG] 5.1e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:23.513206+0000 osd.0 (osd.0) 73 : cluster [DBG] 5.1e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 1630208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 114 pg[9.1c( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] exit Started/Stray 1.006533 5 0.000149
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 114 pg[9.1c( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 114 pg[9.1c( v 68'487 lc 0'0 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=83/83 les/c/f=84/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 crt=68'487 remapped NOTIFY m=9 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 73)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:23.502624+0000 osd.0 (osd.0) 72 : cluster [DBG] 5.1e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:23.513206+0000 osd.0 (osd.0) 73 : cluster [DBG] 5.1e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 114 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.003077 4 0.000120
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 114 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 114 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000076 1 0.000055
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 114 pg[9.1c( v 68'487 lc 39'125 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 pct=0'0 crt=68'487 lcod 0'0 active+remapped m=9 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.064028 1 0.000116
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 114 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:54.699420+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 1662976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.961816 1 0.000094
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started/ReplicaActive 1.029227 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 pct=0'0 crt=68'487 active+remapped mbc={}] exit Started 2.035857 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=113) [0]/[2] r=-1 lpr=113 pi=[83,113)/1 pct=0'0 crt=68'487 active+remapped mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 pct=0'0 crt=68'487 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Reset 0.000094 1 0.000181
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000039 1 0.000045
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=0/0 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=25
Jan 20 19:27:29 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001618 3 0.000048
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000026 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 115 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:55.699554+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 1662976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 115 handle_osd_map epochs [115,116], i have 116, src has [1,116]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 0.998527 2 0.000230
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.000488 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=113/114 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=115/116 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=115/116 n=6 ec=49/33 lis/c=113/83 les/c/f=114/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=115/116 n=6 ec=49/33 lis/c=115/83 les/c/f=116/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.003626 3 0.000480
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=115/116 n=6 ec=49/33 lis/c=115/83 les/c/f=116/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=115/116 n=6 ec=49/33 lis/c=115/83 les/c/f=116/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000020 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1c( v 68'487 (0'0,68'487] local-lis/les=115/116 n=6 ec=49/33 lis/c=115/83 les/c/f=116/84/0 sis=115) [0] r=0 lpr=115 pi=[83,115)/1 crt=68'487 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 116 handle_osd_map epochs [116,116], i have 116, src has [1,116]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e(unlocked)] enter Initial
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=0 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000122 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=0 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000009 1 0.000021
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000005 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000167 1 0.000039
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000221 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 116 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:56.699676+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 1662976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 810133 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 116 handle_osd_map epochs [117,117], i have 116, src has [1,117]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 116 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.855689 2 0.000067
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.855951 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.855977 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=116) [0] r=0 lpr=116 pi=[67,116)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=0'0 remapped NOTIFY mbc={}] exit Reset 0.000102 1 0.000161
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=0'0 remapped NOTIFY mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=0'0 remapped NOTIFY mbc={}] exit Start 0.000007 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 117 pg[9.1e( empty local-lis/les=0/0 n=0 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=0'0 remapped NOTIFY mbc={}] enter Started/Stray
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 117 heartbeat osd_stat(store_statfs(0x4fcec2000/0x0/0x4ffc00000, data 0xafc21/0x166000, compress 0x0/0x0/0x0, omap 0x143e6, meta 0x2bbbc1a), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:57.699797+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72695808 unmapped: 1662976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:58.699925+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72704000 unmapped: 1654784 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 117 handle_osd_map epochs [118,118], i have 117, src has [1,118]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 118 pg[9.1e( v 68'485 lc 0'0 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=68'485 remapped NOTIFY m=6 mbc={}] exit Started/Stray 1.922932 5 0.000061
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 118 pg[9.1e( v 68'485 lc 0'0 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=68'485 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 118 pg[9.1e( v 68'485 lc 0'0 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=67/67 les/c/f=68/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 crt=68'485 remapped NOTIFY m=6 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 118 pg[9.1e( v 68'485 lc 39'299 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.002661 4 0.000129
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 118 pg[9.1e( v 68'485 lc 39'299 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 118 pg[9.1e( v 68'485 lc 39'299 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000046 1 0.000060
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 118 pg[9.1e( v 68'485 lc 39'299 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 pct=0'0 crt=68'485 lcod 0'0 active+remapped m=6 mbc={}] enter Started/ReplicaActive/RepRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.042911 1 0.000054
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 118 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 pct=0'0 crt=68'485 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.052593 1 0.000101
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started/ReplicaActive 0.098561 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 pct=0'0 crt=68'485 active+remapped mbc={}] exit Started 2.021566 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=117) [0]/[2] r=-1 lpr=117 pi=[67,117)/1 pct=0'0 crt=68'485 active+remapped mbc={}] enter Reset
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 pct=0'0 crt=68'485 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4541880224203014143 upacting 4541880224203014143
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 unknown mbc={}] exit Reset 0.002474 1 0.002758
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Start
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 unknown mbc={}] exit Start 0.000661 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started/Primary
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000470 2 0.000850
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=0/0 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Jan 20 19:27:29 compute-0 ceph-osd[86022]: merge_log_dups log.dups.size()=0olog.dups.size()=15
Jan 20 19:27:29 compute-0 ceph-osd[86022]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=15
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000799 2 0.000081
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000006 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 119 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:05:59.700068+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72720384 unmapped: 1638400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.007168 2 0.000102
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.008557 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=117/118 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=119/120 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 120 handle_osd_map epochs [119,120], i have 120, src has [1,120]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=119/120 n=6 ec=49/33 lis/c=117/67 les/c/f=118/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/67 les/c/f=120/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004248 3 0.000179
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/67 les/c/f=120/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/67 les/c/f=120/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000010 0 0.000000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 pg_epoch: 120 pg[9.1e( v 68'485 (0'0,68'485] local-lis/les=119/120 n=6 ec=49/33 lis/c=119/67 les/c/f=120/68/0 sis=119) [0] r=0 lpr=119 pi=[67,119)/1 crt=68'485 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 120 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:00.700275+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72728576 unmapped: 1630208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 120 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb6805/0x173000, compress 0x0/0x0/0x0, omap 0x14e56, meta 0x2bbb1aa), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 120 handle_osd_map epochs [121,121], i have 120, src has [1,121]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 120 handle_osd_map epochs [121,121], i have 121, src has [1,121]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:01.700416+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72761344 unmapped: 1597440 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 834549 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:02.700574+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 121 heartbeat osd_stat(store_statfs(0x4fceae000/0x0/0x4ffc00000, data 0xb8269/0x176000, compress 0x0/0x0/0x0, omap 0x1505c, meta 0x2bbafa4), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 121 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.898245811s of 10.000290871s, submitted: 54
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _renew_subs
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 1581056 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:03.700716+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72777728 unmapped: 1581056 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:04.700823+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72794112 unmapped: 1564672 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:05.701002+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 75 sent 73 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:35.583934+0000 osd.0 (osd.0) 74 : cluster [DBG] 11.10 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:35.594538+0000 osd.0 (osd.0) 75 : cluster [DBG] 11.10 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72802304 unmapped: 1556480 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 75)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:35.583934+0000 osd.0 (osd.0) 74 : cluster [DBG] 11.10 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:35.594538+0000 osd.0 (osd.0) 75 : cluster [DBG] 11.10 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:06.701223+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 1548288 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 837482 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:07.701455+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72810496 unmapped: 1548288 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:08.701601+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 1540096 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:09.701726+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 77 sent 75 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:39.590960+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.1f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:39.601603+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.1f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 1540096 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:10.701937+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 4 last_log 79 sent 77 num 4 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:40.555705+0000 osd.0 (osd.0) 78 : cluster [DBG] 3.1b scrub starts
Jan 20 19:27:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:27:29 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Jan 20 19:27:29 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:40.566066+0000 osd.0 (osd.0) 79 : cluster [DBG] 3.1b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 77)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:39.590960+0000 osd.0 (osd.0) 76 : cluster [DBG] 7.1f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:39.601603+0000 osd.0 (osd.0) 77 : cluster [DBG] 7.1f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72818688 unmapped: 1540096 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:11.702128+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 79)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:40.555705+0000 osd.0 (osd.0) 78 : cluster [DBG] 3.1b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:40.566066+0000 osd.0 (osd.0) 79 : cluster [DBG] 3.1b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1531904 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 842308 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:12.702292+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1531904 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.742737770s of 10.787989616s, submitted: 8
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:13.702503+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 81 sent 79 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:43.565558+0000 osd.0 (osd.0) 80 : cluster [DBG] 8.10 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:43.576095+0000 osd.0 (osd.0) 81 : cluster [DBG] 8.10 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72826880 unmapped: 1531904 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:14.702754+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 81)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:43.565558+0000 osd.0 (osd.0) 80 : cluster [DBG] 8.10 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:43.576095+0000 osd.0 (osd.0) 81 : cluster [DBG] 8.10 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72851456 unmapped: 1507328 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:15.702896+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 1499136 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:16.703054+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72859648 unmapped: 1499136 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844721 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:17.703197+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72867840 unmapped: 1490944 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:18.703352+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1482752 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:19.703568+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1482752 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:20.703733+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72876032 unmapped: 1482752 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:21.703892+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1474560 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844721 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:22.704044+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 83 sent 81 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:52.570117+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:52.580686+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 83)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:52.570117+0000 osd.0 (osd.0) 82 : cluster [DBG] 3.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:52.580686+0000 osd.0 (osd.0) 83 : cluster [DBG] 3.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72884224 unmapped: 1474560 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:23.704270+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1466368 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:24.704410+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72892416 unmapped: 1466368 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:25.704618+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1449984 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.901047707s of 13.030266762s, submitted: 4
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:26.704779+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 85 sent 83 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:56.595817+0000 osd.0 (osd.0) 84 : cluster [DBG] 8.b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:56.606394+0000 osd.0 (osd.0) 85 : cluster [DBG] 8.b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 85)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:56.595817+0000 osd.0 (osd.0) 84 : cluster [DBG] 8.b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:56.606394+0000 osd.0 (osd.0) 85 : cluster [DBG] 8.b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1449984 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 849543 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:27.704980+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 87 sent 85 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:57.548274+0000 osd.0 (osd.0) 86 : cluster [DBG] 7.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:57.558861+0000 osd.0 (osd.0) 87 : cluster [DBG] 7.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 87)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:57.548274+0000 osd.0 (osd.0) 86 : cluster [DBG] 7.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:57.558861+0000 osd.0 (osd.0) 87 : cluster [DBG] 7.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72908800 unmapped: 1449984 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:28.705184+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72916992 unmapped: 1441792 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:29.705464+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 89 sent 87 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:59.527780+0000 osd.0 (osd.0) 88 : cluster [DBG] 10.16 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:06:59.538285+0000 osd.0 (osd.0) 89 : cluster [DBG] 10.16 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72925184 unmapped: 1433600 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 89)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:59.527780+0000 osd.0 (osd.0) 88 : cluster [DBG] 10.16 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:06:59.538285+0000 osd.0 (osd.0) 89 : cluster [DBG] 10.16 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:30.705689+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 91 sent 89 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:00.526961+0000 osd.0 (osd.0) 90 : cluster [DBG] 11.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:00.537524+0000 osd.0 (osd.0) 91 : cluster [DBG] 11.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 1425408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 91)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:00.526961+0000 osd.0 (osd.0) 90 : cluster [DBG] 11.4 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:00.537524+0000 osd.0 (osd.0) 91 : cluster [DBG] 11.4 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:31.705880+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72933376 unmapped: 1425408 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856782 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:32.706016+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1417216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:33.706149+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72941568 unmapped: 1417216 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:34.706283+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72957952 unmapped: 1400832 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:35.706427+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 1392640 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:36.706550+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 1392640 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 856782 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:37.706699+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72966144 unmapped: 1392640 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:38.707474+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72974336 unmapped: 1384448 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.785793304s of 12.800580025s, submitted: 8
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:39.707599+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 93 sent 91 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:09.396318+0000 osd.0 (osd.0) 92 : cluster [DBG] 2.11 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:09.406908+0000 osd.0 (osd.0) 93 : cluster [DBG] 2.11 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 93)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:09.396318+0000 osd.0 (osd.0) 92 : cluster [DBG] 2.11 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:09.406908+0000 osd.0 (osd.0) 93 : cluster [DBG] 2.11 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72990720 unmapped: 1368064 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:40.707792+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1359872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:41.707934+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 72998912 unmapped: 1359872 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859195 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:42.708104+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73007104 unmapped: 1351680 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:43.708233+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1343488 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:44.708357+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73015296 unmapped: 1343488 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:45.708525+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73023488 unmapped: 1335296 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:46.708651+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73031680 unmapped: 1327104 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 859195 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:47.708768+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 95 sent 93 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:17.475529+0000 osd.0 (osd.0) 94 : cluster [DBG] 3.c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:17.486109+0000 osd.0 (osd.0) 95 : cluster [DBG] 3.c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 1318912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 95)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:17.475529+0000 osd.0 (osd.0) 94 : cluster [DBG] 3.c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:17.486109+0000 osd.0 (osd.0) 95 : cluster [DBG] 3.c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:48.708962+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 97 sent 95 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:18.498109+0000 osd.0 (osd.0) 96 : cluster [DBG] 3.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:18.508660+0000 osd.0 (osd.0) 97 : cluster [DBG] 3.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73039872 unmapped: 1318912 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 97)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:18.498109+0000 osd.0 (osd.0) 96 : cluster [DBG] 3.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:18.508660+0000 osd.0 (osd.0) 97 : cluster [DBG] 3.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:49.709197+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 1302528 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:50.709479+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 1302528 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:51.709637+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73056256 unmapped: 1302528 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 864017 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:52.709787+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 1294336 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.060097694s of 14.073665619s, submitted: 6
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:53.709958+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 99 sent 97 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:23.470009+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.18 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:23.480405+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.18 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 99)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:23.470009+0000 osd.0 (osd.0) 98 : cluster [DBG] 7.18 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:23.480405+0000 osd.0 (osd.0) 99 : cluster [DBG] 7.18 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73064448 unmapped: 1294336 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:54.710155+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 1269760 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:55.710352+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 101 sent 99 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:25.448281+0000 osd.0 (osd.0) 100 : cluster [DBG] 11.14 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:25.458821+0000 osd.0 (osd.0) 101 : cluster [DBG] 11.14 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 101)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:25.448281+0000 osd.0 (osd.0) 100 : cluster [DBG] 11.14 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:25.458821+0000 osd.0 (osd.0) 101 : cluster [DBG] 11.14 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 1269760 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:56.710600+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73089024 unmapped: 1269760 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 868845 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:57.710784+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73097216 unmapped: 1261568 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:58.710965+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 103 sent 101 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:28.476031+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:28.486668+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 103)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:28.476031+0000 osd.0 (osd.0) 102 : cluster [DBG] 7.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:28.486668+0000 osd.0 (osd.0) 103 : cluster [DBG] 7.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 1253376 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:06:59.711137+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73105408 unmapped: 1253376 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:00.711292+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 1245184 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:01.711469+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73113600 unmapped: 1245184 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 871256 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:02.711608+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73129984 unmapped: 1228800 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.995968819s of 10.006592751s, submitted: 6
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:03.711755+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 105 sent 103 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:33.476762+0000 osd.0 (osd.0) 104 : cluster [DBG] 2.13 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:33.487485+0000 osd.0 (osd.0) 105 : cluster [DBG] 2.13 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 105)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:33.476762+0000 osd.0 (osd.0) 104 : cluster [DBG] 2.13 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:33.487485+0000 osd.0 (osd.0) 105 : cluster [DBG] 2.13 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73154560 unmapped: 1204224 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:04.711972+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73162752 unmapped: 1196032 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:05.712143+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 107 sent 105 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:35.453996+0000 osd.0 (osd.0) 106 : cluster [DBG] 2.1d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:35.463904+0000 osd.0 (osd.0) 107 : cluster [DBG] 2.1d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 107)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:35.453996+0000 osd.0 (osd.0) 106 : cluster [DBG] 2.1d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:35.463904+0000 osd.0 (osd.0) 107 : cluster [DBG] 2.1d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73170944 unmapped: 1187840 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:06.712400+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 1179648 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 876082 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:07.712536+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 1179648 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:08.712652+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73179136 unmapped: 1179648 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:09.712785+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 109 sent 107 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:39.382532+0000 osd.0 (osd.0) 108 : cluster [DBG] 8.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:39.393017+0000 osd.0 (osd.0) 109 : cluster [DBG] 8.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 109)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:39.382532+0000 osd.0 (osd.0) 108 : cluster [DBG] 8.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:39.393017+0000 osd.0 (osd.0) 109 : cluster [DBG] 8.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 1155072 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:10.713038+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73203712 unmapped: 1155072 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:11.713240+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 1146880 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Jan 20 19:27:29 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 878493 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:12.713390+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 1146880 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:13.713501+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 111 sent 109 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:43.446711+0000 osd.0 (osd.0) 110 : cluster [DBG] 3.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:43.457341+0000 osd.0 (osd.0) 111 : cluster [DBG] 3.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73211904 unmapped: 1146880 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.983028412s of 11.000847816s, submitted: 8
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 111)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:43.446711+0000 osd.0 (osd.0) 110 : cluster [DBG] 3.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:43.457341+0000 osd.0 (osd.0) 111 : cluster [DBG] 3.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:14.713606+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 113 sent 111 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:44.477664+0000 osd.0 (osd.0) 112 : cluster [DBG] 11.e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:44.488224+0000 osd.0 (osd.0) 113 : cluster [DBG] 11.e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1122304 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 113)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:44.477664+0000 osd.0 (osd.0) 112 : cluster [DBG] 11.e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:44.488224+0000 osd.0 (osd.0) 113 : cluster [DBG] 11.e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:15.713871+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1122304 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:16.714025+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73236480 unmapped: 1122304 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 883317 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:17.714155+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 115 sent 113 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:47.475192+0000 osd.0 (osd.0) 114 : cluster [DBG] 3.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:47.485828+0000 osd.0 (osd.0) 115 : cluster [DBG] 3.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1114112 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 115)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:47.475192+0000 osd.0 (osd.0) 114 : cluster [DBG] 3.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:47.485828+0000 osd.0 (osd.0) 115 : cluster [DBG] 3.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:18.714336+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73244672 unmapped: 1114112 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:19.714451+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1097728 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:20.714570+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73261056 unmapped: 1097728 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:21.714688+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1089536 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 885728 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:22.714811+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73269248 unmapped: 1089536 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:23.714966+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1081344 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:24.715114+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 117 sent 115 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:54.385507+0000 osd.0 (osd.0) 116 : cluster [DBG] 8.e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:07:54.396031+0000 osd.0 (osd.0) 117 : cluster [DBG] 8.e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 117)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:54.385507+0000 osd.0 (osd.0) 116 : cluster [DBG] 8.e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:07:54.396031+0000 osd.0 (osd.0) 117 : cluster [DBG] 8.e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1081344 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:25.715481+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73277440 unmapped: 1081344 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:26.715641+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1073152 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 888139 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:27.715779+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73285632 unmapped: 1073152 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:28.715948+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73293824 unmapped: 1064960 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:29.716157+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 1056768 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:30.716309+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73302016 unmapped: 1056768 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:31.716418+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1048576 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 888139 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:32.716539+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1048576 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:33.716686+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73310208 unmapped: 1048576 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:34.716836+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.837589264s of 20.850452423s, submitted: 6
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:35.717581+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 119 sent 117 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:05.327444+0000 osd.0 (osd.0) 118 : cluster [DBG] 11.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:05.341858+0000 osd.0 (osd.0) 119 : cluster [DBG] 11.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73342976 unmapped: 1015808 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 119)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:05.327444+0000 osd.0 (osd.0) 118 : cluster [DBG] 11.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:05.341858+0000 osd.0 (osd.0) 119 : cluster [DBG] 11.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:36.718565+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 890552 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73351168 unmapped: 1007616 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:37.718800+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73359360 unmapped: 999424 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:38.719065+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 121 sent 119 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:08.348915+0000 osd.0 (osd.0) 120 : cluster [DBG] 11.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:08.359332+0000 osd.0 (osd.0) 121 : cluster [DBG] 11.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 121)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:08.348915+0000 osd.0 (osd.0) 120 : cluster [DBG] 11.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:08.359332+0000 osd.0 (osd.0) 121 : cluster [DBG] 11.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73375744 unmapped: 983040 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:39.719336+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 123 sent 121 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:09.352953+0000 osd.0 (osd.0) 122 : cluster [DBG] 7.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:09.363485+0000 osd.0 (osd.0) 123 : cluster [DBG] 7.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 123)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:09.352953+0000 osd.0 (osd.0) 122 : cluster [DBG] 7.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:09.363485+0000 osd.0 (osd.0) 123 : cluster [DBG] 7.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73392128 unmapped: 966656 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:40.719620+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 125 sent 123 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:10.382901+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:10.393426+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 125)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:10.382901+0000 osd.0 (osd.0) 124 : cluster [DBG] 7.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:10.393426+0000 osd.0 (osd.0) 125 : cluster [DBG] 7.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73400320 unmapped: 958464 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:41.719816+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 127 sent 125 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:11.338066+0000 osd.0 (osd.0) 126 : cluster [DBG] 11.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:11.348612+0000 osd.0 (osd.0) 127 : cluster [DBG] 11.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 127)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:11.338066+0000 osd.0 (osd.0) 126 : cluster [DBG] 11.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:11.348612+0000 osd.0 (osd.0) 127 : cluster [DBG] 11.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 900200 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73408512 unmapped: 950272 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:42.720025+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73424896 unmapped: 933888 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:43.720159+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 129 sent 127 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:13.351923+0000 osd.0 (osd.0) 128 : cluster [DBG] 3.17 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:13.362492+0000 osd.0 (osd.0) 129 : cluster [DBG] 3.17 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 129)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:13.351923+0000 osd.0 (osd.0) 128 : cluster [DBG] 3.17 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:13.362492+0000 osd.0 (osd.0) 129 : cluster [DBG] 3.17 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73433088 unmapped: 925696 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:44.720442+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73441280 unmapped: 917504 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:45.720609+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:46.720929+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902613 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:47.721081+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73449472 unmapped: 909312 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:48.721216+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 901120 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:49.721381+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73457664 unmapped: 901120 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:50.721505+0000)
Jan 20 19:27:29 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:51.721635+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 902613 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73465856 unmapped: 892928 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:52.721798+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73474048 unmapped: 884736 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:53.721922+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73482240 unmapped: 876544 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:54.722061+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:55.722265+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73490432 unmapped: 868352 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:56.722407+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 22.071825027s of 22.125001907s, submitted: 12
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 905024 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:57.722525+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 131 sent 129 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:27.453119+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:27.463716+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 131)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:27.453119+0000 osd.0 (osd.0) 130 : cluster [DBG] 3.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:27.463716+0000 osd.0 (osd.0) 131 : cluster [DBG] 3.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73498624 unmapped: 860160 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:58.723497+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 133 sent 131 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:28.431204+0000 osd.0 (osd.0) 132 : cluster [DBG] 7.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:28.441708+0000 osd.0 (osd.0) 133 : cluster [DBG] 7.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 133)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:28.431204+0000 osd.0 (osd.0) 132 : cluster [DBG] 7.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:28.441708+0000 osd.0 (osd.0) 133 : cluster [DBG] 7.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:07:59.723809+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:00.723932+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73506816 unmapped: 851968 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:01.724105+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 909848 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:02.724240+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 135 sent 133 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:32.475313+0000 osd.0 (osd.0) 134 : cluster [DBG] 7.13 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:32.485863+0000 osd.0 (osd.0) 135 : cluster [DBG] 7.13 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 135)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:32.475313+0000 osd.0 (osd.0) 134 : cluster [DBG] 7.13 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:32.485863+0000 osd.0 (osd.0) 135 : cluster [DBG] 7.13 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73523200 unmapped: 835584 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:03.724471+0000)
Jan 20 19:27:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:04.724599+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:05.724838+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73531392 unmapped: 827392 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.a scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.a scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:06.725027+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 137 sent 135 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:36.378934+0000 osd.0 (osd.0) 136 : cluster [DBG] 3.a scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:36.389424+0000 osd.0 (osd.0) 137 : cluster [DBG] 3.a scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 137)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:36.378934+0000 osd.0 (osd.0) 136 : cluster [DBG] 3.a scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:36.389424+0000 osd.0 (osd.0) 137 : cluster [DBG] 3.a scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 912259 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:07.727061+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73539584 unmapped: 819200 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.908113480s of 10.925184250s, submitted: 8
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:08.727285+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 139 sent 137 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:38.378302+0000 osd.0 (osd.0) 138 : cluster [DBG] 8.c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:38.388856+0000 osd.0 (osd.0) 139 : cluster [DBG] 8.c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73547776 unmapped: 811008 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 139)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:38.378302+0000 osd.0 (osd.0) 138 : cluster [DBG] 8.c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:38.388856+0000 osd.0 (osd.0) 139 : cluster [DBG] 8.c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:09.729636+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:10.729844+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73555968 unmapped: 802816 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:11.730054+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73564160 unmapped: 794624 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:12.731114+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 141 sent 139 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:42.351892+0000 osd.0 (osd.0) 140 : cluster [DBG] 3.15 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:42.362464+0000 osd.0 (osd.0) 141 : cluster [DBG] 3.15 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 917083 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 141)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:42.351892+0000 osd.0 (osd.0) 140 : cluster [DBG] 3.15 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:42.362464+0000 osd.0 (osd.0) 141 : cluster [DBG] 3.15 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:13.731757+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73580544 unmapped: 778240 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:14.731903+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 143 sent 141 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:44.365436+0000 osd.0 (osd.0) 142 : cluster [DBG] 8.1f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:44.375999+0000 osd.0 (osd.0) 143 : cluster [DBG] 8.1f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73596928 unmapped: 761856 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 143)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:44.365436+0000 osd.0 (osd.0) 142 : cluster [DBG] 8.1f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:44.375999+0000 osd.0 (osd.0) 143 : cluster [DBG] 8.1f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:15.732407+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 145 sent 143 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:45.415766+0000 osd.0 (osd.0) 144 : cluster [DBG] 8.1d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:45.426416+0000 osd.0 (osd.0) 145 : cluster [DBG] 8.1d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73613312 unmapped: 745472 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 145)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:45.415766+0000 osd.0 (osd.0) 144 : cluster [DBG] 8.1d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:45.426416+0000 osd.0 (osd.0) 145 : cluster [DBG] 8.1d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:16.732750+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 147 sent 145 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:46.377184+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.19 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:46.387894+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.19 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73621504 unmapped: 737280 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 147)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:46.377184+0000 osd.0 (osd.0) 146 : cluster [DBG] 11.19 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:46.387894+0000 osd.0 (osd.0) 147 : cluster [DBG] 11.19 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:17.733085+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 924324 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73629696 unmapped: 729088 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.033535004s of 10.056309700s, submitted: 10
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:18.733334+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 149 sent 147 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:48.434563+0000 osd.0 (osd.0) 148 : cluster [DBG] 8.18 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:48.445074+0000 osd.0 (osd.0) 149 : cluster [DBG] 8.18 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73646080 unmapped: 712704 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:19.733550+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 4 last_log 151 sent 149 num 4 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:49.459017+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.1a scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:49.469231+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.1a scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 149)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:48.434563+0000 osd.0 (osd.0) 148 : cluster [DBG] 8.18 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:48.445074+0000 osd.0 (osd.0) 149 : cluster [DBG] 8.18 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 151)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:49.459017+0000 osd.0 (osd.0) 150 : cluster [DBG] 8.1a scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:49.469231+0000 osd.0 (osd.0) 151 : cluster [DBG] 8.1a scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 696320 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:20.733731+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73662464 unmapped: 696320 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:21.733959+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 153 sent 151 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:51.538842+0000 osd.0 (osd.0) 152 : cluster [DBG] 8.14 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:51.549615+0000 osd.0 (osd.0) 153 : cluster [DBG] 8.14 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 153)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:51.538842+0000 osd.0 (osd.0) 152 : cluster [DBG] 8.14 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:51.549615+0000 osd.0 (osd.0) 153 : cluster [DBG] 8.14 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:22.734149+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931563 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:23.734319+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73670656 unmapped: 688128 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:24.734837+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 155 sent 153 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:54.634412+0000 osd.0 (osd.0) 154 : cluster [DBG] 11.17 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:54.644964+0000 osd.0 (osd.0) 155 : cluster [DBG] 11.17 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 155)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:54.634412+0000 osd.0 (osd.0) 154 : cluster [DBG] 11.17 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:54.644964+0000 osd.0 (osd.0) 155 : cluster [DBG] 11.17 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 679936 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:25.735062+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73678848 unmapped: 679936 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:26.735304+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:27.735610+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 933978 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.164458275s of 10.179276466s, submitted: 8
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:28.735747+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 157 sent 155 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:58.614004+0000 osd.0 (osd.0) 156 : cluster [DBG] 7.1b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:08:58.624468+0000 osd.0 (osd.0) 157 : cluster [DBG] 7.1b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 157)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:58.614004+0000 osd.0 (osd.0) 156 : cluster [DBG] 7.1b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:08:58.624468+0000 osd.0 (osd.0) 157 : cluster [DBG] 7.1b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73687040 unmapped: 671744 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:29.736013+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:30.736195+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73695232 unmapped: 663552 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:31.736440+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 159 sent 157 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:01.666824+0000 osd.0 (osd.0) 158 : cluster [DBG] 3.12 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:01.677452+0000 osd.0 (osd.0) 159 : cluster [DBG] 3.12 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 159)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:01.666824+0000 osd.0 (osd.0) 158 : cluster [DBG] 3.12 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:01.677452+0000 osd.0 (osd.0) 159 : cluster [DBG] 3.12 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:32.736643+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938804 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73703424 unmapped: 655360 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:33.736766+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:34.736911+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:35.737091+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73711616 unmapped: 647168 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:36.737354+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:37.737599+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 938804 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73719808 unmapped: 638976 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.014798164s of 10.024656296s, submitted: 4
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:38.737726+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 161 sent 159 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:08.638625+0000 osd.0 (osd.0) 160 : cluster [DBG] 10.e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:08.652629+0000 osd.0 (osd.0) 161 : cluster [DBG] 10.e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73728000 unmapped: 630784 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 161)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:08.638625+0000 osd.0 (osd.0) 160 : cluster [DBG] 10.e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:08.652629+0000 osd.0 (osd.0) 161 : cluster [DBG] 10.e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:39.737983+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 622592 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:40.738282+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 163 sent 161 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:10.673526+0000 osd.0 (osd.0) 162 : cluster [DBG] 10.d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:10.687564+0000 osd.0 (osd.0) 163 : cluster [DBG] 10.d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73736192 unmapped: 622592 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 163)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:10.673526+0000 osd.0 (osd.0) 162 : cluster [DBG] 10.d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:10.687564+0000 osd.0 (osd.0) 163 : cluster [DBG] 10.d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:41.738616+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:42.738841+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 943630 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:43.739018+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73744384 unmapped: 614400 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:44.739150+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 606208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:45.739402+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73752576 unmapped: 606208 heap: 74358784 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:46.739524+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 165 sent 163 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:16.654219+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.15 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:16.668284+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.15 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1630208 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:47.739729+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 165)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:16.654219+0000 osd.0 (osd.0) 164 : cluster [DBG] 10.15 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:16.668284+0000 osd.0 (osd.0) 165 : cluster [DBG] 10.15 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 946045 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1630208 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:48.739862+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1630208 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:49.740002+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 1622016 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:50.740214+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73777152 unmapped: 1630208 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73785344 unmapped: 1622016 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:52.283609+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.977820396s of 13.988296509s, submitted: 6
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 948456 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1613824 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:53.283719+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 167 sent 165 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:22.626995+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:22.641107+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 167)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:22.626995+0000 osd.0 (osd.0) 166 : cluster [DBG] 8.6 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:22.641107+0000 osd.0 (osd.0) 167 : cluster [DBG] 8.6 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73793536 unmapped: 1613824 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:54.283869+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1605632 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:55.283998+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 169 sent 167 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:24.636859+0000 osd.0 (osd.0) 168 : cluster [DBG] 10.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:24.650952+0000 osd.0 (osd.0) 169 : cluster [DBG] 10.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 169)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:24.636859+0000 osd.0 (osd.0) 168 : cluster [DBG] 10.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:24.650952+0000 osd.0 (osd.0) 169 : cluster [DBG] 10.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1605632 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:56.284310+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73801728 unmapped: 1605632 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:57.284443+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 950869 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1597440 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:58.284620+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 8.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73809920 unmapped: 1597440 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:08:59.284817+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 171 sent 169 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:28.596239+0000 osd.0 (osd.0) 170 : cluster [DBG] 8.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:28.614084+0000 osd.0 (osd.0) 171 : cluster [DBG] 8.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 171)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:28.596239+0000 osd.0 (osd.0) 170 : cluster [DBG] 8.f scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:28.614084+0000 osd.0 (osd.0) 171 : cluster [DBG] 8.f scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1581056 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:00.285086+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.a scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.a scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1581056 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:01.285276+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 173 sent 171 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:30.678799+0000 osd.0 (osd.0) 172 : cluster [DBG] 6.a scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:30.689456+0000 osd.0 (osd.0) 173 : cluster [DBG] 6.a scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 173)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:30.678799+0000 osd.0 (osd.0) 172 : cluster [DBG] 6.a scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:30.689456+0000 osd.0 (osd.0) 173 : cluster [DBG] 6.a scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73826304 unmapped: 1581056 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:02.285640+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 955691 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73834496 unmapped: 1572864 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:03.285839+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73842688 unmapped: 1564672 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:04.286069+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73850880 unmapped: 1556480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:05.286248+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.978474617s of 12.995471954s, submitted: 8
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73867264 unmapped: 1540096 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:06.286486+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 175 sent 173 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:35.622466+0000 osd.0 (osd.0) 174 : cluster [DBG] 6.5 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:35.639978+0000 osd.0 (osd.0) 175 : cluster [DBG] 6.5 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 175)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:35.622466+0000 osd.0 (osd.0) 174 : cluster [DBG] 6.5 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:35.639978+0000 osd.0 (osd.0) 175 : cluster [DBG] 6.5 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1531904 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:07.286705+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958102 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1531904 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:08.286852+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73875456 unmapped: 1531904 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:09.287038+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:10.287196+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73883648 unmapped: 1523712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:11.287407+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:12.287568+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958102 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73891840 unmapped: 1515520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:13.287701+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:14.287855+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73900032 unmapped: 1507328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:15.288014+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:16.288173+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 177 sent 175 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:45.500879+0000 osd.0 (osd.0) 176 : cluster [DBG] 6.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:45.511396+0000 osd.0 (osd.0) 177 : cluster [DBG] 6.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 177)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:45.500879+0000 osd.0 (osd.0) 176 : cluster [DBG] 6.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:45.511396+0000 osd.0 (osd.0) 177 : cluster [DBG] 6.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73916416 unmapped: 1490944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:17.288383+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.815945625s of 11.823007584s, submitted: 4
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962924 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73924608 unmapped: 1482752 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:18.288565+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 179 sent 177 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:47.445529+0000 osd.0 (osd.0) 178 : cluster [DBG] 6.7 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:47.459765+0000 osd.0 (osd.0) 179 : cluster [DBG] 6.7 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 179)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:47.445529+0000 osd.0 (osd.0) 178 : cluster [DBG] 6.7 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:47.459765+0000 osd.0 (osd.0) 179 : cluster [DBG] 6.7 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73940992 unmapped: 1466368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:19.288849+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 181 sent 179 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:48.431265+0000 osd.0 (osd.0) 180 : cluster [DBG] 6.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:48.448925+0000 osd.0 (osd.0) 181 : cluster [DBG] 6.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 181)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:48.431265+0000 osd.0 (osd.0) 180 : cluster [DBG] 6.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:48.448925+0000 osd.0 (osd.0) 181 : cluster [DBG] 6.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1449984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:20.289153+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73957376 unmapped: 1449984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:21.289417+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:22.289651+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965335 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:23.289879+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73965568 unmapped: 1441792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:24.290138+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1417216 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:25.290284+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 183 sent 181 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:54.414928+0000 osd.0 (osd.0) 182 : cluster [DBG] 6.0 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:54.439872+0000 osd.0 (osd.0) 183 : cluster [DBG] 6.0 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 183)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:54.414928+0000 osd.0 (osd.0) 182 : cluster [DBG] 6.0 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:54.439872+0000 osd.0 (osd.0) 183 : cluster [DBG] 6.0 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73990144 unmapped: 1417216 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:26.290796+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:27.291033+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.958466530s of 10.302054405s, submitted: 7
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970159 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 73998336 unmapped: 1409024 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:28.291253+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 185 sent 183 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:57.416718+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.11 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:57.451999+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.11 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:29.291464+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 4 last_log 187 sent 185 num 4 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:58.452896+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:09:58.477630+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 185)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:57.416718+0000 osd.0 (osd.0) 184 : cluster [DBG] 9.11 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:57.451999+0000 osd.0 (osd.0) 185 : cluster [DBG] 9.11 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 187)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:58.452896+0000 osd.0 (osd.0) 186 : cluster [DBG] 9.b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:09:58.477630+0000 osd.0 (osd.0) 187 : cluster [DBG] 9.b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74006528 unmapped: 1400832 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:30.291638+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1392640 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:31.291851+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 189 sent 187 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:00.436102+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.16 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:00.460751+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.16 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 189)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:00.436102+0000 osd.0 (osd.0) 188 : cluster [DBG] 9.16 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:00.460751+0000 osd.0 (osd.0) 189 : cluster [DBG] 9.16 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1392640 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:32.292075+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974983 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74014720 unmapped: 1392640 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:33.292240+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.5 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.5 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74031104 unmapped: 1376256 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:34.292435+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 191 sent 189 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:03.423688+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.5 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:03.465482+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.5 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 191)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:03.423688+0000 osd.0 (osd.0) 190 : cluster [DBG] 9.5 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:03.465482+0000 osd.0 (osd.0) 191 : cluster [DBG] 9.5 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74039296 unmapped: 1368064 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:35.292596+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1359872 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:36.292777+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74047488 unmapped: 1359872 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:37.292888+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979805 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74063872 unmapped: 1343488 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:38.293060+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 193 sent 191 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:07.346500+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:07.374401+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 193)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:07.346500+0000 osd.0 (osd.0) 192 : cluster [DBG] 9.9 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:07.374401+0000 osd.0 (osd.0) 193 : cluster [DBG] 9.9 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1335296 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:39.293296+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74072064 unmapped: 1335296 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:40.293514+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.255956650s of 12.659899712s, submitted: 9
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74080256 unmapped: 1327104 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:41.293740+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 195 sent 193 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:10.407531+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:10.446351+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 195)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:10.407531+0000 osd.0 (osd.0) 194 : cluster [DBG] 9.d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:10.446351+0000 osd.0 (osd.0) 195 : cluster [DBG] 9.d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74104832 unmapped: 1302528 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:42.293960+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 197 sent 195 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:11.379425+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:11.421778+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 197)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:11.379425+0000 osd.0 (osd.0) 196 : cluster [DBG] 9.1 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:11.421778+0000 osd.0 (osd.0) 197 : cluster [DBG] 9.1 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 987038 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74113024 unmapped: 1294336 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:43.294205+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 199 sent 197 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:12.367063+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:12.405820+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 199)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:12.367063+0000 osd.0 (osd.0) 198 : cluster [DBG] 9.3 scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:12.405820+0000 osd.0 (osd.0) 199 : cluster [DBG] 9.3 scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74121216 unmapped: 1286144 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:44.294487+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74129408 unmapped: 1277952 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:45.294702+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74137600 unmapped: 1269760 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:46.295021+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74153984 unmapped: 1253376 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:47.295206+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 201 sent 199 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:16.328900+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:16.371406+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 201)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:16.328900+0000 osd.0 (osd.0) 200 : cluster [DBG] 9.1c scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:16.371406+0000 osd.0 (osd.0) 201 : cluster [DBG] 9.1c scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 989451 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74162176 unmapped: 1245184 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:48.295417+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1236992 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:49.295634+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74170368 unmapped: 1236992 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:50.295797+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1228800 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:51.295920+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74178560 unmapped: 1228800 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:52.296125+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.937269211s of 11.953340530s, submitted: 8
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 991864 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74194944 unmapped: 1212416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:53.296299+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 203 sent 201 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:22.360910+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:22.389194+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 203)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:22.360910+0000 osd.0 (osd.0) 202 : cluster [DBG] 9.1d scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:22.389194+0000 osd.0 (osd.0) 203 : cluster [DBG] 9.1d scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1187840 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:54.296516+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 205 sent 203 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:23.397001+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.1b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:23.418268+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.1b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 205)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:23.397001+0000 osd.0 (osd.0) 204 : cluster [DBG] 9.1b scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:23.418268+0000 osd.0 (osd.0) 205 : cluster [DBG] 9.1b scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74219520 unmapped: 1187840 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:55.296705+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74227712 unmapped: 1179648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:56.297007+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  log_queue is 2 last_log 207 sent 205 num 2 unsent 2 sending 2
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:25.358337+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.1e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  will send 2026-01-20T19:10:25.390126+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.1e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client handle_log_ack log(last 207)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:25.358337+0000 osd.0 (osd.0) 206 : cluster [DBG] 9.1e scrub starts
Jan 20 19:27:29 compute-0 ceph-osd[86022]: log_client  logged 2026-01-20T19:10:25.390126+0000 osd.0 (osd.0) 207 : cluster [DBG] 9.1e scrub ok
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1171456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:57.297221+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74235904 unmapped: 1171456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:58.297345+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1163264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:09:59.297421+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1163264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:00.297603+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74244096 unmapped: 1163264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:01.297788+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1155072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:02.297909+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1155072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:03.298063+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1146880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:04.298183+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1155072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:05.298426+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74252288 unmapped: 1155072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:06.298606+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1146880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:07.298771+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74260480 unmapped: 1146880 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:08.298920+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1138688 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:09.299044+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1138688 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:10.299172+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74268672 unmapped: 1138688 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:11.299334+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1130496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:12.299494+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1130496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:13.299709+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74276864 unmapped: 1130496 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:14.299873+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1122304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:15.300049+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74285056 unmapped: 1122304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:16.300246+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1114112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:17.300466+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74293248 unmapped: 1114112 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:18.301336+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:19.301953+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1105920 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:20.302445+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1105920 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:21.303062+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74301440 unmapped: 1105920 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:22.303229+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74309632 unmapped: 1097728 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:23.303378+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74317824 unmapped: 1089536 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:24.303852+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:25.304001+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:26.304466+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74326016 unmapped: 1081344 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:27.304750+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1073152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:28.305479+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74334208 unmapped: 1073152 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:29.306018+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1064960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:30.306397+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1064960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:31.306537+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74342400 unmapped: 1064960 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:32.306751+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1056768 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:33.306915+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1056768 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:34.307064+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74350592 unmapped: 1056768 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:35.307231+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:36.307437+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74358784 unmapped: 1048576 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:37.307550+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74366976 unmapped: 1040384 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:38.307729+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74375168 unmapped: 1032192 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:39.307898+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74383360 unmapped: 1024000 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:40.308230+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:41.308385+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74391552 unmapped: 1015808 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:42.308503+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:43.308669+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74399744 unmapped: 1007616 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:44.308807+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74407936 unmapped: 999424 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:45.308957+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 983040 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:46.309175+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74424320 unmapped: 983040 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:47.309302+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:48.309454+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:49.309649+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74432512 unmapped: 974848 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:50.309781+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 966656 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:51.309986+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74440704 unmapped: 966656 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:52.310173+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 958464 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:53.310292+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74448896 unmapped: 958464 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:54.310449+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 950272 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:55.310614+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 950272 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:56.310828+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74457088 unmapped: 950272 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:57.310970+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 942080 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:58.311141+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74465280 unmapped: 942080 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:10:59.311327+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74473472 unmapped: 933888 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:00.311441+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 925696 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:01.311570+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74481664 unmapped: 925696 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:02.311687+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 917504 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:03.311851+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 917504 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:04.312018+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74489856 unmapped: 917504 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:05.312157+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 909312 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:06.312333+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74498048 unmapped: 909312 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:07.312495+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 901120 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:08.312616+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 901120 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:09.312771+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74506240 unmapped: 901120 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:10.312975+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 884736 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:11.313124+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74522624 unmapped: 884736 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:12.313244+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 876544 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:13.313390+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 876544 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:14.313531+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74530816 unmapped: 876544 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:15.313656+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 868352 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:16.313836+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 868352 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:17.313991+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74539008 unmapped: 868352 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:18.314183+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 860160 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:19.314410+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74547200 unmapped: 860160 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:20.314553+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 843776 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:21.314708+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74563584 unmapped: 843776 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:22.314952+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 835584 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:23.315355+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 835584 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:24.315541+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74571776 unmapped: 835584 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:25.315892+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 827392 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:26.316123+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74579968 unmapped: 827392 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:27.316986+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 819200 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:28.317144+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 819200 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:29.317300+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74588160 unmapped: 819200 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:30.317601+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 811008 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:31.317719+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74596352 unmapped: 811008 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:32.317928+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 802816 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:33.318071+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 802816 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:34.318269+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74604544 unmapped: 802816 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:35.318485+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 794624 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:36.318709+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74612736 unmapped: 794624 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:37.318848+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 786432 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:38.318982+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74620928 unmapped: 786432 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:39.319088+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 778240 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:40.319264+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 778240 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:41.319421+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74629120 unmapped: 778240 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:42.319520+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 770048 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:43.319624+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74637312 unmapped: 770048 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:44.319733+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74645504 unmapped: 761856 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:45.319858+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 753664 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:46.320010+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74653696 unmapped: 753664 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:47.320153+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 745472 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:48.320324+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74661888 unmapped: 745472 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:49.320451+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 737280 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:50.320584+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 737280 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:51.320726+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74670080 unmapped: 737280 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:52.320838+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 729088 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:53.320969+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 729088 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:54.321091+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74678272 unmapped: 729088 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:55.321266+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74686464 unmapped: 720896 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:56.321445+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 712704 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:57.321666+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 712704 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:58.321853+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74694656 unmapped: 712704 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:11:59.322015+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:00.322174+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:01.322320+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74711040 unmapped: 696320 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:02.322481+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:03.322610+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74719232 unmapped: 688128 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:04.322768+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74752000 unmapped: 655360 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:05.322892+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 647168 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:06.323064+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74760192 unmapped: 647168 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:07.323284+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 638976 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:08.323440+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 638976 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:09.323599+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74768384 unmapped: 638976 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:10.323757+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 622592 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:11.323906+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74784768 unmapped: 622592 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:12.324061+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 614400 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:13.324185+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 614400 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:14.324325+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74792960 unmapped: 614400 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:15.324497+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 598016 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:16.324752+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 598016 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:17.324965+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74809344 unmapped: 598016 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:18.325108+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 589824 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:19.325214+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74817536 unmapped: 589824 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:20.325331+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74825728 unmapped: 581632 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:21.325448+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 573440 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:22.325577+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74833920 unmapped: 573440 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:23.325698+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 565248 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:24.325810+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 565248 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:25.326050+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74842112 unmapped: 565248 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:26.326210+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 557056 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:27.326483+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74850304 unmapped: 557056 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:28.326758+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 548864 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:29.326903+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74858496 unmapped: 548864 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:30.327090+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 540672 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:31.327295+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 540672 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:32.327506+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74866688 unmapped: 540672 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:33.327628+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 532480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:34.327746+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74874880 unmapped: 532480 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:35.327879+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74883072 unmapped: 524288 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:36.328108+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 516096 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:37.328269+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74891264 unmapped: 516096 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:38.328422+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74899456 unmapped: 507904 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:39.328593+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 499712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:40.328738+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74907648 unmapped: 499712 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:41.328875+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 491520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:42.328991+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74915840 unmapped: 491520 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:43.329278+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 483328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:44.329434+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 483328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:45.329548+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74924032 unmapped: 483328 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:46.329694+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 475136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:47.329835+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74932224 unmapped: 475136 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:48.330051+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74940416 unmapped: 466944 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:49.330188+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:50.330327+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74956800 unmapped: 450560 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:51.330572+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 442368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:52.330703+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 442368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:53.330870+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74964992 unmapped: 442368 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:54.330998+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:55.331217+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:56.331453+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74973184 unmapped: 434176 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:57.331599+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 425984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:58.331788+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 425984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:12:59.331934+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74981376 unmapped: 425984 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:00.332067+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 74989568 unmapped: 417792 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:01.332201+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Cumulative writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
                                           Cumulative WAL: 5615 writes, 879 syncs, 6.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 5615 writes, 24K keys, 5615 commit groups, 1.0 writes per commit group, ingest: 18.71 MB, 0.03 MB/s
                                           Interval WAL: 5615 writes, 879 syncs, 6.39 writes per sync, written: 0.02 GB, 0.03 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 600.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 3.4e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75046912 unmapped: 360448 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:02.332340+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 352256 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:03.332507+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75055104 unmapped: 352256 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:04.332633+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 344064 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:05.332800+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 344064 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:06.333011+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75063296 unmapped: 344064 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:07.333165+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 335872 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:08.333354+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75071488 unmapped: 335872 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:09.333554+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 319488 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:10.333674+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 319488 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:11.333787+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75087872 unmapped: 319488 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:12.333912+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 311296 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:13.334068+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 311296 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:14.334149+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75096064 unmapped: 311296 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:15.334314+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 286720 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:16.334664+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75120640 unmapped: 286720 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:17.334815+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 278528 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:18.334949+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75128832 unmapped: 278528 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:19.335183+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 270336 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:20.335330+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 270336 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:21.335458+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75137024 unmapped: 270336 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:22.335599+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 262144 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:23.335717+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75145216 unmapped: 262144 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:24.335873+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 245760 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:25.336037+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75153408 unmapped: 253952 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:26.336234+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 245760 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:27.336382+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 245760 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:28.336534+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75161600 unmapped: 245760 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:29.337119+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 237568 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:30.337265+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 237568 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:31.337343+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75169792 unmapped: 237568 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:32.337521+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 229376 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:33.337594+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75177984 unmapped: 229376 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:34.337720+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75186176 unmapped: 221184 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 sudo[247530]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:35.337838+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 204800 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:36.337994+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75202560 unmapped: 204800 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:37.338115+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 196608 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:38.338245+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75210752 unmapped: 196608 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:39.338467+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75218944 unmapped: 188416 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:40.338632+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 180224 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:41.338781+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 180224 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:42.338917+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75227136 unmapped: 180224 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:43.339095+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75235328 unmapped: 172032 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:44.339263+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75243520 unmapped: 163840 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:45.339415+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 155648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:46.339625+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75251712 unmapped: 155648 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:47.339728+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 147456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:48.339873+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75259904 unmapped: 147456 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:49.340085+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75268096 unmapped: 139264 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:50.340246+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 131072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 sudo[247530]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:51.340393+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75276288 unmapped: 131072 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:52.340516+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 240.680923462s of 240.693923950s, submitted: 6
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 75309056 unmapped: 98304 heap: 75407360 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:53.340650+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 76914688 unmapped: 589824 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:54.340750+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77004800 unmapped: 499712 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:55.340863+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 483328 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:56.341211+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 483328 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:57.341347+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:58.341467+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 483328 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:13:59.341637+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 483328 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:00.341766+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 483328 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:01.341943+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 483328 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:02.342066+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 483328 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:03.342208+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 483328 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:04.342349+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77021184 unmapped: 483328 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:05.342510+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 450560 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:06.342727+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77053952 unmapped: 450560 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:07.342887+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 442368 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:08.343036+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 442368 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:09.343459+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77062144 unmapped: 442368 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:10.343607+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:11.343750+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77086720 unmapped: 417792 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 sudo[247530]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:12.343883+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:13.344094+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:14.344225+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77094912 unmapped: 409600 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:15.344354+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 376832 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:16.344735+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77127680 unmapped: 376832 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:17.344850+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 368640 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:18.345026+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 368640 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:19.345165+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77135872 unmapped: 368640 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:20.345325+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 352256 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:21.345501+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77152256 unmapped: 352256 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:22.345649+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 344064 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:23.345765+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77160448 unmapped: 344064 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:24.345925+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 335872 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:25.346068+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 335872 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:26.346206+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77168640 unmapped: 335872 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:27.346353+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 327680 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:28.346571+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 327680 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:29.346686+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77176832 unmapped: 327680 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:30.346826+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 311296 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:31.346963+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77193216 unmapped: 311296 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:32.347094+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 303104 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:33.347432+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77201408 unmapped: 303104 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:34.347557+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77209600 unmapped: 294912 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:35.347721+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 278528 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:36.347997+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77225984 unmapped: 278528 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:37.348225+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 270336 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:38.348405+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77234176 unmapped: 270336 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:39.348535+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 262144 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:40.348727+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 262144 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:41.348929+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 262144 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:42.349100+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 253952 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:43.349276+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 253952 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:44.349407+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 245760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:45.349534+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 245760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:46.349706+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77258752 unmapped: 245760 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:47.349855+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 237568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:48.350005+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 237568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:49.350142+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:50.350485+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:51.350591+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:52.350697+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:53.350809+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:54.350922+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77291520 unmapped: 212992 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:55.351026+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 180224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:56.351167+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 172032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:57.351320+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 172032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:58.351491+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 163840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:14:59.351649+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 163840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:00.351776+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77348864 unmapped: 155648 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:01.351892+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:02.352002+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:03.352114+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:04.352231+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:05.352334+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:06.352500+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:07.352603+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:08.352697+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:09.352822+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:10.352988+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:11.353151+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:12.353425+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:13.353552+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:14.353684+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:15.353830+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:16.354037+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:17.354158+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:18.354283+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:19.354415+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:20.354571+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:21.354766+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:22.354976+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:23.355159+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:24.355333+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:25.355523+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 106496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:26.355742+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 106496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:27.355911+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 106496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:28.356052+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 106496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:29.356181+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 106496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:30.356331+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:31.356581+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:32.356753+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:33.356909+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:34.357080+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:35.357257+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 90112 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:36.357447+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 90112 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:37.357610+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 90112 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:38.357780+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 90112 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:39.357931+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77414400 unmapped: 90112 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:40.358077+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 81920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:41.358210+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 81920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:42.358430+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 81920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:43.358603+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 81920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:44.358843+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77422592 unmapped: 81920 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:45.358980+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 65536 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:46.359244+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 65536 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:47.359500+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 65536 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:48.359641+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 65536 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:49.359812+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77438976 unmapped: 65536 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:50.359939+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 262144 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:51.360065+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 262144 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:52.360272+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 262144 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:53.360487+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 262144 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:54.360771+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77242368 unmapped: 262144 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:55.360913+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 253952 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:56.361107+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 253952 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:57.361303+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 253952 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:58.361445+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 253952 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:15:59.361706+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 253952 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:00.361902+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 253952 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:01.362067+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 253952 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:02.362202+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 253952 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:03.362399+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77250560 unmapped: 253952 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:04.362516+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77266944 unmapped: 237568 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:05.362737+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:06.362954+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:07.363066+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:08.363190+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:09.363295+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:10.363424+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:11.363546+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:12.363669+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:13.363807+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:14.363952+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:15.364089+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:16.364259+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:17.364395+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:18.364574+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:19.364708+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:20.364827+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:21.364954+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 229376 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:22.365079+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:23.365185+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:24.365320+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:25.365415+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:26.388742+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:27.388860+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:28.389069+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:29.389207+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:30.389349+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:31.389490+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:32.389678+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:33.389813+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:34.389995+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77283328 unmapped: 221184 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:35.390143+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 204800 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:36.390317+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 204800 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:37.390425+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 204800 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:38.390554+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 204800 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:39.390705+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77299712 unmapped: 204800 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:40.390833+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 196608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:41.390971+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 196608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:42.391170+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 196608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:43.391424+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77307904 unmapped: 196608 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:44.391676+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77316096 unmapped: 188416 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:45.391892+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 180224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:46.392185+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 180224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:47.392396+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 180224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:48.392603+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 180224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:49.392775+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 180224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:50.392902+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 180224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:51.393052+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 180224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:52.393235+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 180224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:53.393442+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 180224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:54.393591+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77324288 unmapped: 180224 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:55.393735+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 172032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:56.393964+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 172032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:57.394150+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 172032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:58.394333+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 172032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:16:59.394450+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 172032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:00.394581+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 172032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:01.394715+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 172032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:02.394854+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 172032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:03.395055+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77332480 unmapped: 172032 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:04.395188+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77340672 unmapped: 163840 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:05.395313+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:06.395518+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:07.395666+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:08.395792+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:09.395925+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:10.396052+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:11.396217+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:12.396374+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:13.396521+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:14.396646+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:15.396845+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:16.397058+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:17.397191+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77357056 unmapped: 147456 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:18.397399+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:19.397574+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:20.397754+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:21.397947+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:22.398156+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:23.398352+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:24.398535+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:25.398746+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:26.399079+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:27.399266+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:28.399411+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:29.399558+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:30.399694+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:31.399826+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:32.399988+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:33.400117+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:34.400298+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77365248 unmapped: 139264 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:35.400418+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 122880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:36.400581+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 122880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:37.400687+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 122880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:38.400862+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 122880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:39.400991+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 122880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:40.401143+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 122880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:41.401287+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 122880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:42.401452+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 122880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:43.401599+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77381632 unmapped: 122880 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:44.401722+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 114688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:45.401891+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 114688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:46.402076+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 114688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:47.402253+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 114688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:48.402414+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 106496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:49.402537+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 106496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:50.402779+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 106496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:51.402916+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 106496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:52.404099+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 106496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:53.404329+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77398016 unmapped: 106496 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:54.404442+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77389824 unmapped: 114688 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:55.404563+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:56.404703+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:57.405150+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:58.405463+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:17:59.405580+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:00.405819+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:01.405954+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:02.406255+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:03.406408+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:04.406645+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:05.406796+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:06.406989+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:07.407164+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:08.407290+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77406208 unmapped: 98304 heap: 77504512 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:09.407435+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: mgrc ms_handle_reset ms_handle_reset con 0x561429476000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/894791725
Jan 20 19:27:29 compute-0 ceph-osd[86022]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/894791725,v1:192.168.122.100:6801/894791725]
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: get_auth_request con 0x56142b551800 auth_method 0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: mgrc handle_mgr_configure stats_period=5
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 811008 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:10.407618+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 811008 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:11.407733+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 811008 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:12.407872+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 811008 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:13.408015+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 811008 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:14.408200+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 811008 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:15.408340+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77742080 unmapped: 811008 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:16.408516+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 ms_handle_reset con 0x561429477000 session 0x5614290f0fc0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: handle_auth_request added challenge on 0x561429daf000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1277952 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:17.408665+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1277952 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:18.408958+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77275136 unmapped: 1277952 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:19.409110+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:20.409256+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:21.409400+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:22.409543+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:23.409687+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:24.409815+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:25.409945+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:26.410070+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:27.410196+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:28.410321+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:29.410452+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:30.410589+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:31.410739+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:32.410881+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:33.410985+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:34.411080+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:35.411228+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:36.411405+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:37.411533+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:38.411672+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:39.411790+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:40.411950+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:41.412098+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:42.412239+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:43.412380+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:44.412509+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:45.412637+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:46.412810+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:47.412939+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:48.413073+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:49.413209+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 996690 data_alloc: 218103808 data_used: 5012
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:50.413464+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:51.413627+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:52.413776+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: handle_auth_request added challenge on 0x56142b97b000
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.932525635s of 300.146911621s, submitted: 106
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: <cls> /ceph/rpmbuild/BUILD/ceph-20.2.0/src/cls/fifo/cls_fifo.cc:366: int rados::cls::fifo::{anonymous}::get_meta(cls_method_context_t, ceph::buffer::v15_2_0::list*, ceph::buffer::v15_2_0::list*)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:53.413935+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:54.414082+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:55.414309+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:56.414592+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:57.414722+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:58.414843+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:18:59.414950+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:00.415073+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:01.415219+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:02.415343+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:03.415466+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:04.415575+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:05.415698+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:06.415843+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:07.416279+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:08.416464+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:09.416588+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:10.416727+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77463552 unmapped: 1089536 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:11.416900+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 1081344 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:12.417063+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 1081344 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:13.417225+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 1081344 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:14.417372+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 1081344 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:15.417475+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 1081344 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:16.417639+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 1081344 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:17.417790+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 1081344 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:18.417926+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 1081344 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:19.418060+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77471744 unmapped: 1081344 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:20.418184+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 1073152 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:21.418299+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 1073152 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:22.418415+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 1073152 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:23.418542+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 1073152 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:24.418685+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77479936 unmapped: 1073152 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:25.418867+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 1064960 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:26.419083+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 1064960 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:27.419215+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 1064960 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:28.419402+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 1064960 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:29.419543+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77488128 unmapped: 1064960 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:30.419701+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:31.419817+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:32.419947+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:33.420123+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:34.420270+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:35.420412+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:36.420620+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:37.420760+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:38.420880+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:39.420999+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:40.421140+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:41.421329+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:42.421466+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:43.421782+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:44.421955+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:45.422093+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:46.422282+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:47.422437+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:48.422565+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:49.422692+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:50.422828+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:51.422957+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:52.423108+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:53.423258+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:54.423403+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:55.423580+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:56.423760+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:57.423876+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:58.424011+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:19:59.424104+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:00.424242+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:01.424428+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:02.424568+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:03.424731+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77496320 unmapped: 1056768 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:04.424872+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:05.425028+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:06.425166+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:07.425294+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:08.425462+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:09.425641+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:10.425797+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:11.425861+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:12.425972+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:13.426138+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:14.426298+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:15.426532+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:16.426744+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:17.426910+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:18.427086+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:19.427231+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:20.427418+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:21.427603+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:22.427750+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77512704 unmapped: 1040384 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:23.427898+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 1032192 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:24.428037+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77520896 unmapped: 1032192 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:25.428192+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 1024000 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:26.428339+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 1024000 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:27.428461+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 1024000 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:28.428595+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 1024000 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:29.428731+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77529088 unmapped: 1024000 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:30.428912+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:31.429058+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:32.429167+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:33.429281+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:34.429422+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:35.429639+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:36.429846+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:37.430003+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:38.430153+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:39.430284+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:40.430451+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:41.430542+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:42.430699+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:43.430870+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:44.431010+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:45.431169+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:46.431322+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:47.431459+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:48.431614+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:49.431763+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:50.431918+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:51.432087+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:52.432547+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:53.432678+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:54.432813+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:55.432947+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:56.433127+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:57.433310+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:58.433501+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:20:59.433681+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:00.433844+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:01.434031+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:02.434215+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:03.434385+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:04.434641+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:05.434806+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:06.434985+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:07.435148+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:08.435355+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:09.435557+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:10.435716+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:11.435908+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:12.436112+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:13.436289+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:14.436428+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:15.436599+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:16.436811+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:17.437172+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:18.437446+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:19.437595+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:20.437734+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:21.437875+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:22.438036+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:23.438191+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:24.438348+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:25.438522+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:26.438680+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:27.438860+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:28.439018+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:29.439164+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:30.439325+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:31.439467+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:32.439625+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:33.439780+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77578240 unmapped: 974848 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:34.439947+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77594624 unmapped: 958464 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:35.440117+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 950272 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:36.440348+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 950272 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:37.440573+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 950272 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:38.440762+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 950272 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:39.440965+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 950272 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:40.441084+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 950272 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:41.441181+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 950272 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:42.441321+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 950272 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:43.441450+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77602816 unmapped: 950272 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:44.441597+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:45.441737+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:46.441920+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:47.442128+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:48.442296+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:49.442460+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:50.442632+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:51.442784+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:52.443011+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:53.443168+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:54.443403+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:55.443562+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:56.443765+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:57.443940+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:58.444111+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:21:59.444247+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:00.444418+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:01.444611+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:02.444780+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:03.444924+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77611008 unmapped: 942080 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:04.445057+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:05.445185+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:06.445408+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:07.445656+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:08.446102+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:09.446488+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:10.446791+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:11.447015+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:12.447167+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 sudo[247555]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --env CEPH_VOLUME_OSDSPEC_AFFINITY=default_drive_group --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac --config-json - -- lvm batch --no-auto /dev/ceph_vg0/ceph_lv0 /dev/ceph_vg1/ceph_lv1 /dev/ceph_vg2/ceph_lv2 --objectstore bluestore --yes --no-systemd
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:13.447465+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:14.447963+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:15.448534+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:16.448765+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 sudo[247555]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:17.449005+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:18.449323+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77619200 unmapped: 933888 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:19.449421+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:20.449628+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:21.449892+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:22.450173+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:23.450465+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:24.450654+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:25.450865+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:26.451087+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:27.451283+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:28.451532+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:29.451674+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:30.451842+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:31.452012+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread fragmentation_score=0.000140 took=0.000047s
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:32.452169+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:33.452694+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:34.452937+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:35.453080+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:36.453262+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:37.453401+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:38.453597+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:39.453736+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:40.453977+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:41.454080+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:42.454237+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:43.454431+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:44.454578+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:45.454733+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:46.454923+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 917504 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:47.455151+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 917504 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:48.455288+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77635584 unmapped: 917504 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:49.455462+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:50.455628+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:51.455861+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:52.456025+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:53.456163+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:54.456302+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:55.456428+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:56.456615+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:57.456804+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:58.456953+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:22:59.457126+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:00.457287+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77504512 unmapped: 1048576 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:01.457429+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: [db/db_impl/db_impl.cc:1111] 
                                           ** DB Stats **
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Cumulative writes: 5863 writes, 24K keys, 5863 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
                                           Cumulative WAL: 5863 writes, 1003 syncs, 5.85 writes per sync, written: 0.02 GB, 0.02 MB/s
                                           Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
                                           Interval writes: 248 writes, 372 keys, 248 commit groups, 1.0 writes per commit group, ingest: 0.13 MB, 0.00 MB/s
                                           Interval WAL: 248 writes, 124 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
                                           Interval stall: 00:00:0.000 H:M:S, 0.0 percent
                                           
                                           ** Compaction Stats [default] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [default] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [default] **
                                           
                                           ** Compaction Stats [m-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-0] **
                                           
                                           ** Compaction Stats [m-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-1] **
                                           
                                           ** Compaction Stats [m-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [m-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [m-2] **
                                           
                                           ** Compaction Stats [p-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.56 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.56 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.4      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-0] **
                                           
                                           ** Compaction Stats [p-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-1] **
                                           
                                           ** Compaction Stats [p-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [p-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [p-2] **
                                           
                                           ** Compaction Stats [O-0] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-0] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-0] **
                                           
                                           ** Compaction Stats [O-1] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-1] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-1] **
                                           
                                           ** Compaction Stats [O-2] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      1/0    1.25 KB   0.1      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Sum      1/0    1.25 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [O-2] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x5614276374b0#2 capacity: 224.00 MB usage: 0.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 2 last_secs: 8e-06 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(1,0.20 KB,8.85555e-05%) FilterBlock(1,0.11 KB,4.76837e-05%) IndexBlock(1,0.14 KB,6.13076e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [O-2] **
                                           
                                           ** Compaction Stats [L] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                             L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [L] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         1    0.001       0      0       0.0       0.0
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [L] **
                                           
                                           ** Compaction Stats [P] **
                                           Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                            Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                            Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
                                           
                                           ** Compaction Stats [P] **
                                           Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
                                           ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
                                           
                                           Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0
                                           
                                           Uptime(secs): 1200.0 total, 600.0 interval
                                           Flush(GB): cumulative 0.000, interval 0.000
                                           AddFile(GB): cumulative 0.000, interval 0.000
                                           AddFile(Total Files): cumulative 0, interval 0
                                           AddFile(L0 Files): cumulative 0, interval 0
                                           AddFile(Keys): cumulative 0, interval 0
                                           Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
                                           Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
                                           Block cache BinnedLRUCache@0x561427637a30#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 5.2e-05 secs_since: 0
                                           Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)
                                           
                                           ** File Read Latency Histogram By Level [P] **
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:02.457579+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:03.457760+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:04.457892+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77537280 unmapped: 1015808 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:05.458019+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:06.458231+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:07.458455+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:08.458598+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:09.458754+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:10.458961+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:11.459189+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:12.459428+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:13.459594+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:14.460044+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:15.460526+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:16.460735+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77545472 unmapped: 1007616 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _send_mon_message to mon.compute-0 at v2:192.168.122.100:3300/0
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:17.460968+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:18.461155+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:19.461355+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:20.461510+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:21.461669+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:22.461858+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:23.462022+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:24.462193+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:25.462430+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:26.462996+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:27.463309+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:28.463494+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:29.463642+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77561856 unmapped: 991232 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:30.463820+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:31.463996+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:32.464187+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:33.464349+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:34.464573+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:35.464791+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:36.464976+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:37.465163+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:38.465410+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:39.465684+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:40.465837+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:41.465963+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:42.466118+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:43.466241+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:44.466374+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:45.466497+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:46.466680+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:47.466831+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:48.466991+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:49.467138+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:50.467320+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:51.467481+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:52.467659+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77570048 unmapped: 983040 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 299.912536621s of 299.936035156s, submitted: 18
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:53.467824+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77627392 unmapped: 925696 heap: 78553088 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:54.467942+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77832192 unmapped: 1769472 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:55.468050+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1712128 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:56.468255+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1712128 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:57.468427+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1712128 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:58.468490+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1712128 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:23:59.468622+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1712128 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:00.468748+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1712128 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:01.468940+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1712128 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:02.469112+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1712128 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:03.469269+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77889536 unmapped: 1712128 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:04.469385+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:05.469510+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:06.469760+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:07.469919+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:08.470048+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:09.470195+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:10.470336+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:11.470471+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:12.470602+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:13.470730+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:14.470861+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:15.471461+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:16.471888+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:17.472040+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:18.472432+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:19.472642+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:20.473271+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:21.473883+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:22.474667+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:23.475020+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:24.475327+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:25.475694+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:26.476182+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:27.476407+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:28.476736+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:29.477141+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:30.477678+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:31.477889+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:32.478045+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:33.478200+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:34.478437+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:35.478621+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:36.478795+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:37.478999+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:38.479212+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:39.479430+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:40.479630+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:41.479845+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:42.480040+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:43.480173+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:44.480319+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:45.480533+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:46.480697+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:47.480933+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:48.481075+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:49.481247+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:50.481371+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:51.481505+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:52.481630+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:53.481788+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:54.481987+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:55.482164+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:56.482439+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:57.482583+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:58.482754+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:24:59.482980+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:00.483109+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:01.483339+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:02.483611+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:03.483835+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77897728 unmapped: 1703936 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:04.484078+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:05.484286+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:06.484548+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:07.484685+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:08.484861+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:09.485055+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:10.485187+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:11.485282+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:12.485439+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:13.485606+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:14.485728+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:15.485881+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:16.486048+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:17.486213+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:18.486421+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:19.486586+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:20.486790+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:21.486965+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:22.487207+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:23.487466+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:24.487697+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:25.487831+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:26.488023+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:27.488212+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:28.488404+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:29.488596+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:30.488758+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:31.488875+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:32.489031+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:33.489190+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:34.489335+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:35.489446+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:36.489630+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:37.489768+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:38.489915+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:39.490070+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:40.490228+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:41.490381+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:42.490551+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:43.490707+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:44.490902+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:45.491125+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:46.491300+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:47.491399+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:48.491551+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:49.491677+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:50.491840+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:51.856605+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:52.856728+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:53.856834+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:54.856965+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:55.857071+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:56.857211+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:57.857327+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:58.857429+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:25:59.857587+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:00.857742+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:01.857869+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:02.858013+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:03.858173+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:04.858323+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:05.858460+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:06.858676+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:07.859004+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:08.859216+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:09.859405+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:10.859516+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:11.859657+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:12.859777+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:13.859990+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:14.860068+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:15.860188+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:16.860385+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:17.860528+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:18.860677+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:19.860879+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:20.861048+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:21.861226+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:22.861343+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:23.861511+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:24.862268+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:25.862584+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:26.862885+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:27.863101+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:28.864000+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:29.864240+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:30.865529+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:31.865744+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:32.865864+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:33.866125+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:34.866605+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:35.866711+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:36.867167+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:37.867330+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:38.867690+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:39.867848+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:40.868077+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:41.868214+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:42.868349+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:43.868476+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:44.868593+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:45.868728+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:46.868901+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:47.869030+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:48.869203+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:49.869334+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:50.869428+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:51.869544+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:52.869669+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:54.419067+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fceb3000/0x0/0x4ffc00000, data 0xb9cb8/0x179000, compress 0x0/0x0/0x0, omap 0x152fe, meta 0x2bbad02), peers [1,2] op hist [])
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:55.419230+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 77922304 unmapped: 1679360 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: do_command 'config diff' '{prefix=config diff}'
Jan 20 19:27:29 compute-0 ceph-osd[86022]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:56.419414+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: do_command 'config show' '{prefix=config show}'
Jan 20 19:27:29 compute-0 ceph-osd[86022]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 78249984 unmapped: 1351680 heap: 79601664 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: do_command 'counter dump' '{prefix=counter dump}'
Jan 20 19:27:29 compute-0 ceph-osd[86022]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Jan 20 19:27:29 compute-0 ceph-osd[86022]: do_command 'counter schema' '{prefix=counter schema}'
Jan 20 19:27:29 compute-0 ceph-osd[86022]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Jan 20 19:27:29 compute-0 ceph-osd[86022]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Jan 20 19:27:29 compute-0 ceph-osd[86022]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 997074 data_alloc: 218103808 data_used: 5472
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:57.419628+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 78389248 unmapped: 2260992 heap: 80650240 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: tick
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_tickets
Jan 20 19:27:29 compute-0 ceph-osd[86022]: monclient: _check_auth_rotating have uptodate secrets (they expire after 2026-01-20T19:26:58.419752+0000)
Jan 20 19:27:29 compute-0 ceph-osd[86022]: prioritycache tune_memory target: 4294967296 mapped: 78692352 unmapped: 1957888 heap: 80650240 old mem: 2845415832 new mem: 2845415832
Jan 20 19:27:29 compute-0 ceph-osd[86022]: do_command 'log dump' '{prefix=log dump}'
Jan 20 19:27:29 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14524 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} v 0)
Jan 20 19:27:29 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:27:29 compute-0 podman[247621]: 2026-01-20 19:27:29.512376024 +0000 UTC m=+0.048285735 container create 8e4b28d1f227fe5afc284b20f3dbe4ee622151f5faae9fd1fb856bc1086d5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ganguly, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_REF=tentacle, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030)
Jan 20 19:27:29 compute-0 podman[247621]: 2026-01-20 19:27:29.486973356 +0000 UTC m=+0.022883087 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:27:29 compute-0 systemd[1]: Started libpod-conmon-8e4b28d1f227fe5afc284b20f3dbe4ee622151f5faae9fd1fb856bc1086d5cf4.scope.
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='client.14514 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: pgmap v861: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='client.14516 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='client.14518 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='client.14520 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.admin"} : dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "osd tree", "states": ["destroyed"], "format": "json"} : dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "auth get", "entity": "client.bootstrap-osd"} : dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config generate-minimal-conf"} : dispatch
Jan 20 19:27:29 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' cmd={"prefix": "config get", "who": "client.rgw.rgw.compute-0.dbzrzk", "name": "rgw_frontends"} : dispatch
Jan 20 19:27:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:29 compute-0 podman[247621]: 2026-01-20 19:27:29.63718772 +0000 UTC m=+0.173097461 container init 8e4b28d1f227fe5afc284b20f3dbe4ee622151f5faae9fd1fb856bc1086d5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ganguly, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:27:29 compute-0 podman[247621]: 2026-01-20 19:27:29.644592971 +0000 UTC m=+0.180502682 container start 8e4b28d1f227fe5afc284b20f3dbe4ee622151f5faae9fd1fb856bc1086d5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ganguly, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Jan 20 19:27:29 compute-0 modest_ganguly[247655]: 167 167
Jan 20 19:27:29 compute-0 systemd[1]: libpod-8e4b28d1f227fe5afc284b20f3dbe4ee622151f5faae9fd1fb856bc1086d5cf4.scope: Deactivated successfully.
Jan 20 19:27:29 compute-0 podman[247621]: 2026-01-20 19:27:29.653141779 +0000 UTC m=+0.189051510 container attach 8e4b28d1f227fe5afc284b20f3dbe4ee622151f5faae9fd1fb856bc1086d5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ganguly, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Jan 20 19:27:29 compute-0 podman[247621]: 2026-01-20 19:27:29.654245615 +0000 UTC m=+0.190155346 container died 8e4b28d1f227fe5afc284b20f3dbe4ee622151f5faae9fd1fb856bc1086d5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:27:29 compute-0 systemd[1]: var-lib-containers-storage-overlay-7ee2dd1d9305a846b41470222d1511d5bd904c0cef5a821747439a8b7fa5a2aa-merged.mount: Deactivated successfully.
Jan 20 19:27:29 compute-0 podman[247621]: 2026-01-20 19:27:29.708856414 +0000 UTC m=+0.244766115 container remove 8e4b28d1f227fe5afc284b20f3dbe4ee622151f5faae9fd1fb856bc1086d5cf4 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=modest_ganguly, io.buildah.version=1.41.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Jan 20 19:27:29 compute-0 systemd[1]: libpod-conmon-8e4b28d1f227fe5afc284b20f3dbe4ee622151f5faae9fd1fb856bc1086d5cf4.scope: Deactivated successfully.
Jan 20 19:27:29 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "quorum_status"} v 0)
Jan 20 19:27:29 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4103597183' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 20 19:27:29 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14528 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:29 compute-0 podman[247682]: 2026-01-20 19:27:29.870460606 +0000 UTC m=+0.043376776 container create 2b6616083c0395ff9d3192d7ef12ee98eda0a1b1883836b7553c83b5791f3e79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.41.3)
Jan 20 19:27:29 compute-0 systemd[1]: Started libpod-conmon-2b6616083c0395ff9d3192d7ef12ee98eda0a1b1883836b7553c83b5791f3e79.scope.
Jan 20 19:27:29 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:29 compute-0 podman[247682]: 2026-01-20 19:27:29.851035843 +0000 UTC m=+0.023952033 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368aad4d34bb2309d66c2fd5d3eee09184c4295a658f1954a0bc5406ac198f83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368aad4d34bb2309d66c2fd5d3eee09184c4295a658f1954a0bc5406ac198f83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368aad4d34bb2309d66c2fd5d3eee09184c4295a658f1954a0bc5406ac198f83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368aad4d34bb2309d66c2fd5d3eee09184c4295a658f1954a0bc5406ac198f83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:29 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/368aad4d34bb2309d66c2fd5d3eee09184c4295a658f1954a0bc5406ac198f83/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:30 compute-0 podman[247682]: 2026-01-20 19:27:30.001271488 +0000 UTC m=+0.174187668 container init 2b6616083c0395ff9d3192d7ef12ee98eda0a1b1883836b7553c83b5791f3e79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_chaplygin, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:27:30 compute-0 podman[247682]: 2026-01-20 19:27:30.009040827 +0000 UTC m=+0.181957017 container start 2b6616083c0395ff9d3192d7ef12ee98eda0a1b1883836b7553c83b5791f3e79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_chaplygin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.license=GPLv2, CEPH_REF=tentacle)
Jan 20 19:27:30 compute-0 podman[247682]: 2026-01-20 19:27:30.104874538 +0000 UTC m=+0.277790718 container attach 2b6616083c0395ff9d3192d7ef12ee98eda0a1b1883836b7553c83b5791f3e79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 19:27:30 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14531 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions"} v 0)
Jan 20 19:27:30 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1445336670' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 20 19:27:30 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:30 compute-0 nostalgic_chaplygin[247706]: --> passed data devices: 0 physical, 3 LVM
Jan 20 19:27:30 compute-0 nostalgic_chaplygin[247706]: --> All data devices are unavailable
Jan 20 19:27:30 compute-0 systemd[1]: libpod-2b6616083c0395ff9d3192d7ef12ee98eda0a1b1883836b7553c83b5791f3e79.scope: Deactivated successfully.
Jan 20 19:27:30 compute-0 podman[247682]: 2026-01-20 19:27:30.478834916 +0000 UTC m=+0.651751096 container died 2b6616083c0395ff9d3192d7ef12ee98eda0a1b1883836b7553c83b5791f3e79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, io.buildah.version=1.41.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=tentacle)
Jan 20 19:27:30 compute-0 systemd[1]: var-lib-containers-storage-overlay-368aad4d34bb2309d66c2fd5d3eee09184c4295a658f1954a0bc5406ac198f83-merged.mount: Deactivated successfully.
Jan 20 19:27:30 compute-0 podman[247682]: 2026-01-20 19:27:30.597167545 +0000 UTC m=+0.770083725 container remove 2b6616083c0395ff9d3192d7ef12ee98eda0a1b1883836b7553c83b5791f3e79 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=nostalgic_chaplygin, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:27:30 compute-0 ceph-mon[75120]: from='client.14524 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:30 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/4103597183' entity='client.admin' cmd={"prefix": "quorum_status"} : dispatch
Jan 20 19:27:30 compute-0 ceph-mon[75120]: from='client.14528 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:30 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1445336670' entity='client.admin' cmd={"prefix": "versions"} : dispatch
Jan 20 19:27:30 compute-0 systemd[1]: libpod-conmon-2b6616083c0395ff9d3192d7ef12ee98eda0a1b1883836b7553c83b5791f3e79.scope: Deactivated successfully.
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.623769) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937250623802, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2114, "num_deletes": 251, "total_data_size": 3461147, "memory_usage": 3518584, "flush_reason": "Manual Compaction"}
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937250642736, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3383907, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16363, "largest_seqno": 18476, "table_properties": {"data_size": 3374315, "index_size": 6022, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 20201, "raw_average_key_size": 20, "raw_value_size": 3354813, "raw_average_value_size": 3368, "num_data_blocks": 272, "num_entries": 996, "num_filter_entries": 996, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768937031, "oldest_key_time": 1768937031, "file_creation_time": 1768937250, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 19035 microseconds, and 6452 cpu microseconds.
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.642802) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3383907 bytes OK
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.642820) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.644515) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.644534) EVENT_LOG_v1 {"time_micros": 1768937250644529, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.644553) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3452122, prev total WAL file size 3452122, number of live WAL files 2.
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.645338) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3304KB)], [38(7723KB)]
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937250645453, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11292645, "oldest_snapshot_seqno": -1}
Jan 20 19:27:30 compute-0 sudo[247555]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4514 keys, 9546306 bytes, temperature: kUnknown
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937250697680, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9546306, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9512710, "index_size": 21198, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11333, "raw_key_size": 109214, "raw_average_key_size": 24, "raw_value_size": 9427860, "raw_average_value_size": 2088, "num_data_blocks": 900, "num_entries": 4514, "num_filter_entries": 4514, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1768935724, "oldest_key_time": 0, "file_creation_time": 1768937250, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "a47071cc-b77a-49b8-9d53-e31f11fbdebb", "db_session_id": "09M3MP4DL9LGPOBMD17J", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.698003) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9546306 bytes
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.699563) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 215.7 rd, 182.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.5 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(6.2) write-amplify(2.8) OK, records in: 5028, records dropped: 514 output_compression: NoCompression
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.699578) EVENT_LOG_v1 {"time_micros": 1768937250699571, "job": 18, "event": "compaction_finished", "compaction_time_micros": 52351, "compaction_time_cpu_micros": 20499, "output_level": 6, "num_output_files": 1, "total_output_size": 9546306, "num_input_records": 5028, "num_output_records": 4514, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937250700227, "job": 18, "event": "table_file_deletion", "file_number": 40}
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: EVENT_LOG_v1 {"time_micros": 1768937250701599, "job": 18, "event": "table_file_deletion", "file_number": 38}
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.645245) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.701652) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.701657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.701659) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.701660) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:27:30 compute-0 ceph-mon[75120]: rocksdb: (Original Log Time 2026/01/20-19:27:30.701662) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Jan 20 19:27:30 compute-0 sudo[247840]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:27:30 compute-0 sudo[247840]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:30 compute-0 sudo[247840]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:30 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14534 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:30 compute-0 sudo[247873]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- lvm list --format json
Jan 20 19:27:30 compute-0 sudo[247873]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:30 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Jan 20 19:27:30 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2183081342' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 20 19:27:31 compute-0 podman[247939]: 2026-01-20 19:27:31.131461383 +0000 UTC m=+0.050528910 container create 4dea2a8a2a20f1d0a5acef780450e4339682e6a4090b6202510580ca9e68723a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:27:31 compute-0 systemd[1]: Started libpod-conmon-4dea2a8a2a20f1d0a5acef780450e4339682e6a4090b6202510580ca9e68723a.scope.
Jan 20 19:27:31 compute-0 podman[247939]: 2026-01-20 19:27:31.107561391 +0000 UTC m=+0.026628948 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:27:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:31 compute-0 podman[247939]: 2026-01-20 19:27:31.238678952 +0000 UTC m=+0.157746509 container init 4dea2a8a2a20f1d0a5acef780450e4339682e6a4090b6202510580ca9e68723a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20251030)
Jan 20 19:27:31 compute-0 podman[247939]: 2026-01-20 19:27:31.248611423 +0000 UTC m=+0.167678950 container start 4dea2a8a2a20f1d0a5acef780450e4339682e6a4090b6202510580ca9e68723a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=tentacle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Jan 20 19:27:31 compute-0 podman[247939]: 2026-01-20 19:27:31.252267063 +0000 UTC m=+0.171334610 container attach 4dea2a8a2a20f1d0a5acef780450e4339682e6a4090b6202510580ca9e68723a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20251030, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Jan 20 19:27:31 compute-0 clever_villani[247975]: 167 167
Jan 20 19:27:31 compute-0 systemd[1]: libpod-4dea2a8a2a20f1d0a5acef780450e4339682e6a4090b6202510580ca9e68723a.scope: Deactivated successfully.
Jan 20 19:27:31 compute-0 podman[247939]: 2026-01-20 19:27:31.254007235 +0000 UTC m=+0.173074782 container died 4dea2a8a2a20f1d0a5acef780450e4339682e6a4090b6202510580ca9e68723a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-96006a85b8a88a75ae1ff0e7a73ae7fe119ac16aed8f160499c7d5e78b825a17-merged.mount: Deactivated successfully.
Jan 20 19:27:31 compute-0 podman[247939]: 2026-01-20 19:27:31.294568181 +0000 UTC m=+0.213635708 container remove 4dea2a8a2a20f1d0a5acef780450e4339682e6a4090b6202510580ca9e68723a (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=clever_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.schema-version=1.0)
Jan 20 19:27:31 compute-0 systemd[1]: libpod-conmon-4dea2a8a2a20f1d0a5acef780450e4339682e6a4090b6202510580ca9e68723a.scope: Deactivated successfully.
Jan 20 19:27:31 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Jan 20 19:27:31 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/783947475' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 20 19:27:31 compute-0 podman[248002]: 2026-01-20 19:27:31.493755757 +0000 UTC m=+0.053834900 container create f22ef7dda1344a08041b754e5da6ea83875791c8fbb8c79d850c18c39c9eb8fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banach, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=tentacle, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:27:31 compute-0 systemd[1]: Started libpod-conmon-f22ef7dda1344a08041b754e5da6ea83875791c8fbb8c79d850c18c39c9eb8fd.scope.
Jan 20 19:27:31 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1cca282cff6fa15163f690f90aad35f193f553ba02546fd26fb55d8881a813/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:31 compute-0 podman[248002]: 2026-01-20 19:27:31.464735392 +0000 UTC m=+0.024814555 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1cca282cff6fa15163f690f90aad35f193f553ba02546fd26fb55d8881a813/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1cca282cff6fa15163f690f90aad35f193f553ba02546fd26fb55d8881a813/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1cca282cff6fa15163f690f90aad35f193f553ba02546fd26fb55d8881a813/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:31 compute-0 podman[248002]: 2026-01-20 19:27:31.575876695 +0000 UTC m=+0.135955848 container init f22ef7dda1344a08041b754e5da6ea83875791c8fbb8c79d850c18c39c9eb8fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=tentacle)
Jan 20 19:27:31 compute-0 podman[248002]: 2026-01-20 19:27:31.583449969 +0000 UTC m=+0.143529112 container start f22ef7dda1344a08041b754e5da6ea83875791c8fbb8c79d850c18c39c9eb8fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20251030, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Jan 20 19:27:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Optimize plan auto_2026-01-20_19:27:31
Jan 20 19:27:31 compute-0 ceph-mgr[75417]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Jan 20 19:27:31 compute-0 ceph-mgr[75417]: [balancer INFO root] do_upmap
Jan 20 19:27:31 compute-0 ceph-mgr[75417]: [balancer INFO root] pools ['.rgw.root', 'images', 'cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'vms', '.mgr', 'volumes', 'default.rgw.control', 'default.rgw.meta']
Jan 20 19:27:31 compute-0 ceph-mgr[75417]: [balancer INFO root] prepared 0/10 upmap changes
Jan 20 19:27:31 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:27:31 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:27:31 compute-0 podman[248002]: 2026-01-20 19:27:31.679557218 +0000 UTC m=+0.239636371 container attach f22ef7dda1344a08041b754e5da6ea83875791c8fbb8c79d850c18c39c9eb8fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banach, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=tentacle)
Jan 20 19:27:31 compute-0 ceph-mon[75120]: from='client.14531 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:31 compute-0 ceph-mon[75120]: pgmap v862: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:31 compute-0 ceph-mon[75120]: from='client.14534 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Jan 20 19:27:31 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/2183081342' entity='client.admin' cmd={"prefix": "health", "detail": "detail", "format": "json-pretty"} : dispatch
Jan 20 19:27:31 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/783947475' entity='client.admin' cmd={"prefix": "osd tree", "format": "json-pretty"} : dispatch
Jan 20 19:27:31 compute-0 ceph-mon[75120]: from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Jan 20 19:27:31 compute-0 ceph-mon[75120]: from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Jan 20 19:27:31 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:27:31 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:27:31 compute-0 festive_banach[248025]: {
Jan 20 19:27:31 compute-0 festive_banach[248025]:     "0": [
Jan 20 19:27:31 compute-0 festive_banach[248025]:         {
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "devices": [
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "/dev/loop3"
Jan 20 19:27:31 compute-0 festive_banach[248025]:             ],
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_name": "ceph_lv0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_size": "21470642176",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=ea83dc26-7f71-429f-b9c1-f87c51d6aebb,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "name": "ceph_lv0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "path": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "tags": {
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.block_uuid": "tq1csw-Z3ek-2J4M-OZJW-JQWH-SfNt-SDTv3N",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.cluster_name": "ceph",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.crush_device_class": "",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.encrypted": "0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.objectstore": "bluestore",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.osd_fsid": "ea83dc26-7f71-429f-b9c1-f87c51d6aebb",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.osd_id": "0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.type": "block",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.vdo": "0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.with_tpm": "0"
Jan 20 19:27:31 compute-0 festive_banach[248025]:             },
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "type": "block",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "vg_name": "ceph_vg0"
Jan 20 19:27:31 compute-0 festive_banach[248025]:         }
Jan 20 19:27:31 compute-0 festive_banach[248025]:     ],
Jan 20 19:27:31 compute-0 festive_banach[248025]:     "1": [
Jan 20 19:27:31 compute-0 festive_banach[248025]:         {
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "devices": [
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "/dev/loop4"
Jan 20 19:27:31 compute-0 festive_banach[248025]:             ],
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_name": "ceph_lv1",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_size": "21470642176",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=aba2c458-fbc4-4039-bc23-d828faa8f69c,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "name": "ceph_lv1",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "path": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "tags": {
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.block_uuid": "D59KrR-Zt2u-r3qX-Hyn4-eY3f-GMeX-T2UZIe",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.cluster_name": "ceph",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.crush_device_class": "",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.encrypted": "0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.objectstore": "bluestore",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.osd_fsid": "aba2c458-fbc4-4039-bc23-d828faa8f69c",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.osd_id": "1",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.type": "block",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.vdo": "0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.with_tpm": "0"
Jan 20 19:27:31 compute-0 festive_banach[248025]:             },
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "type": "block",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "vg_name": "ceph_vg1"
Jan 20 19:27:31 compute-0 festive_banach[248025]:         }
Jan 20 19:27:31 compute-0 festive_banach[248025]:     ],
Jan 20 19:27:31 compute-0 festive_banach[248025]:     "2": [
Jan 20 19:27:31 compute-0 festive_banach[248025]:         {
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "devices": [
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "/dev/loop5"
Jan 20 19:27:31 compute-0 festive_banach[248025]:             ],
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_name": "ceph_lv2",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_size": "21470642176",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=90fff835-31df-513f-a409-b6642f04e6ac,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.objectstore=bluestore,ceph.osd_fsid=f12cccca-abeb-4720-98f5-dcecf6096427,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "lv_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "name": "ceph_lv2",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "path": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "tags": {
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.block_uuid": "fdzCu2-38yV-HRnt-uxS6-FkAB-9oWW-CrxJy8",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.cephx_lockbox_secret": "",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.cluster_fsid": "90fff835-31df-513f-a409-b6642f04e6ac",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.cluster_name": "ceph",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.crush_device_class": "",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.encrypted": "0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.objectstore": "bluestore",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.osd_fsid": "f12cccca-abeb-4720-98f5-dcecf6096427",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.osd_id": "2",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.osdspec_affinity": "default_drive_group",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.type": "block",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.vdo": "0",
Jan 20 19:27:31 compute-0 festive_banach[248025]:                 "ceph.with_tpm": "0"
Jan 20 19:27:31 compute-0 festive_banach[248025]:             },
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "type": "block",
Jan 20 19:27:31 compute-0 festive_banach[248025]:             "vg_name": "ceph_vg2"
Jan 20 19:27:31 compute-0 festive_banach[248025]:         }
Jan 20 19:27:31 compute-0 festive_banach[248025]:     ]
Jan 20 19:27:31 compute-0 festive_banach[248025]: }
Jan 20 19:27:31 compute-0 podman[248002]: 2026-01-20 19:27:31.929046387 +0000 UTC m=+0.489125540 container died f22ef7dda1344a08041b754e5da6ea83875791c8fbb8c79d850c18c39c9eb8fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banach, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20251030, OSD_FLAVOR=default, CEPH_REF=tentacle, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Jan 20 19:27:31 compute-0 systemd[1]: Starting Hostname Service...
Jan 20 19:27:31 compute-0 systemd[1]: libpod-f22ef7dda1344a08041b754e5da6ea83875791c8fbb8c79d850c18c39c9eb8fd.scope: Deactivated successfully.
Jan 20 19:27:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-9c1cca282cff6fa15163f690f90aad35f193f553ba02546fd26fb55d8881a813-merged.mount: Deactivated successfully.
Jan 20 19:27:31 compute-0 podman[248002]: 2026-01-20 19:27:31.976313027 +0000 UTC m=+0.536392170 container remove f22ef7dda1344a08041b754e5da6ea83875791c8fbb8c79d850c18c39c9eb8fd (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=festive_banach, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251030, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=tentacle, org.label-schema.vendor=CentOS, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:27:31 compute-0 systemd[1]: libpod-conmon-f22ef7dda1344a08041b754e5da6ea83875791c8fbb8c79d850c18c39c9eb8fd.scope: Deactivated successfully.
Jan 20 19:27:32 compute-0 sudo[247873]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:32 compute-0 systemd[1]: Started Hostname Service.
Jan 20 19:27:32 compute-0 sudo[248136]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/which python3
Jan 20 19:27:32 compute-0 sudo[248136]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:32 compute-0 sudo[248136]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:32 compute-0 sudo[248172]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/python3 /var/lib/ceph/90fff835-31df-513f-a409-b6642f04e6ac/cephadm.ed5a13ad26f7f55dd30e9b63855e4e581fd86973bec1d21a12ed0bb26af19c8b --image quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86 --timeout 895 ceph-volume --fsid 90fff835-31df-513f-a409-b6642f04e6ac -- raw list --format json
Jan 20 19:27:32 compute-0 sudo[248172]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:32 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump"} v 0)
Jan 20 19:27:32 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1921073170' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 20 19:27:32 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:32 compute-0 podman[248260]: 2026-01-20 19:27:32.523558421 +0000 UTC m=+0.037304159 container create b88ab0ff6b2001195841b1438738c8b5e4f843898c77dc61b19779303ed594e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 20 19:27:32 compute-0 systemd[1]: Started libpod-conmon-b88ab0ff6b2001195841b1438738c8b5e4f843898c77dc61b19779303ed594e5.scope.
Jan 20 19:27:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:32 compute-0 podman[248260]: 2026-01-20 19:27:32.59916408 +0000 UTC m=+0.112909838 container init b88ab0ff6b2001195841b1438738c8b5e4f843898c77dc61b19779303ed594e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20251030, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:27:32 compute-0 podman[248260]: 2026-01-20 19:27:32.505083901 +0000 UTC m=+0.018829659 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:27:32 compute-0 podman[248260]: 2026-01-20 19:27:32.608119607 +0000 UTC m=+0.121865345 container start b88ab0ff6b2001195841b1438738c8b5e4f843898c77dc61b19779303ed594e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489)
Jan 20 19:27:32 compute-0 musing_mclaren[248278]: 167 167
Jan 20 19:27:32 compute-0 systemd[1]: libpod-b88ab0ff6b2001195841b1438738c8b5e4f843898c77dc61b19779303ed594e5.scope: Deactivated successfully.
Jan 20 19:27:32 compute-0 podman[248260]: 2026-01-20 19:27:32.611592772 +0000 UTC m=+0.125338520 container attach b88ab0ff6b2001195841b1438738c8b5e4f843898c77dc61b19779303ed594e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, CEPH_REF=tentacle, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Jan 20 19:27:32 compute-0 podman[248260]: 2026-01-20 19:27:32.615700372 +0000 UTC m=+0.129446110 container died b88ab0ff6b2001195841b1438738c8b5e4f843898c77dc61b19779303ed594e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.41.3, CEPH_REF=tentacle, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:27:32 compute-0 systemd[1]: var-lib-containers-storage-overlay-b40d331cbb8e9a170d0f0b747272c86105367bee5a4dc94d1f58646dd69a558f-merged.mount: Deactivated successfully.
Jan 20 19:27:32 compute-0 podman[248260]: 2026-01-20 19:27:32.646463611 +0000 UTC m=+0.160209349 container remove b88ab0ff6b2001195841b1438738c8b5e4f843898c77dc61b19779303ed594e5 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=musing_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=tentacle, org.label-schema.vendor=CentOS)
Jan 20 19:27:32 compute-0 systemd[1]: libpod-conmon-b88ab0ff6b2001195841b1438738c8b5e4f843898c77dc61b19779303ed594e5.scope: Deactivated successfully.
Jan 20 19:27:32 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14550 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:32 compute-0 ceph-mon[75120]: from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Jan 20 19:27:32 compute-0 ceph-mon[75120]: from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Jan 20 19:27:32 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1921073170' entity='client.admin' cmd={"prefix": "config dump"} : dispatch
Jan 20 19:27:32 compute-0 podman[248312]: 2026-01-20 19:27:32.810715566 +0000 UTC m=+0.036475648 container create 975d74d916a28cb5f2604a27ea87e73c7caa567e32db047dbb7b2fbde18264e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_franklin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=tentacle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Jan 20 19:27:32 compute-0 systemd[1]: Started libpod-conmon-975d74d916a28cb5f2604a27ea87e73c7caa567e32db047dbb7b2fbde18264e6.scope.
Jan 20 19:27:32 compute-0 systemd[1]: Started libcrun container.
Jan 20 19:27:32 compute-0 podman[248312]: 2026-01-20 19:27:32.794067301 +0000 UTC m=+0.019827403 image pull 524f3da276461682bec27427fb8a63b5139c40ad4185939aede197474a6817b3 quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86
Jan 20 19:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eaa892221ed8e742ab1be350c01c1f627b315c449e050650667e0af615ed84/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eaa892221ed8e742ab1be350c01c1f627b315c449e050650667e0af615ed84/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eaa892221ed8e742ab1be350c01c1f627b315c449e050650667e0af615ed84/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/42eaa892221ed8e742ab1be350c01c1f627b315c449e050650667e0af615ed84/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Jan 20 19:27:32 compute-0 podman[248312]: 2026-01-20 19:27:32.92511253 +0000 UTC m=+0.150872642 container init 975d74d916a28cb5f2604a27ea87e73c7caa567e32db047dbb7b2fbde18264e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20251030, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.41.3, CEPH_REF=tentacle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:27:32 compute-0 podman[248312]: 2026-01-20 19:27:32.934155689 +0000 UTC m=+0.159915771 container start 975d74d916a28cb5f2604a27ea87e73c7caa567e32db047dbb7b2fbde18264e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_franklin, org.label-schema.schema-version=1.0, CEPH_REF=tentacle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 20 19:27:32 compute-0 podman[248312]: 2026-01-20 19:27:32.941425756 +0000 UTC m=+0.167185848 container attach 975d74d916a28cb5f2604a27ea87e73c7caa567e32db047dbb7b2fbde18264e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_franklin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=tentacle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, io.buildah.version=1.41.3)
Jan 20 19:27:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Jan 20 19:27:33 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1868898185' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 20 19:27:33 compute-0 lvm[248527]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Jan 20 19:27:33 compute-0 lvm[248527]: VG ceph_vg1 finished
Jan 20 19:27:33 compute-0 lvm[248526]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Jan 20 19:27:33 compute-0 lvm[248526]: VG ceph_vg0 finished
Jan 20 19:27:33 compute-0 lvm[248529]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Jan 20 19:27:33 compute-0 lvm[248529]: VG ceph_vg2 finished
Jan 20 19:27:33 compute-0 zealous_franklin[248354]: {}
Jan 20 19:27:33 compute-0 ceph-mon[75120]: pgmap v863: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:33 compute-0 ceph-mon[75120]: from='client.14550 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:33 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1868898185' entity='client.admin' cmd={"prefix": "df", "detail": "detail"} : dispatch
Jan 20 19:27:33 compute-0 systemd[1]: libpod-975d74d916a28cb5f2604a27ea87e73c7caa567e32db047dbb7b2fbde18264e6.scope: Deactivated successfully.
Jan 20 19:27:33 compute-0 systemd[1]: libpod-975d74d916a28cb5f2604a27ea87e73c7caa567e32db047dbb7b2fbde18264e6.scope: Consumed 1.311s CPU time.
Jan 20 19:27:33 compute-0 podman[248312]: 2026-01-20 19:27:33.76451774 +0000 UTC m=+0.990277842 container died 975d74d916a28cb5f2604a27ea87e73c7caa567e32db047dbb7b2fbde18264e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_franklin, org.label-schema.build-date=20251030, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=tentacle, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Jan 20 19:27:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df"} v 0)
Jan 20 19:27:33 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3595477887' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 20 19:27:33 compute-0 systemd[1]: var-lib-containers-storage-overlay-42eaa892221ed8e742ab1be350c01c1f627b315c449e050650667e0af615ed84-merged.mount: Deactivated successfully.
Jan 20 19:27:33 compute-0 podman[248312]: 2026-01-20 19:27:33.811430692 +0000 UTC m=+1.037190764 container remove 975d74d916a28cb5f2604a27ea87e73c7caa567e32db047dbb7b2fbde18264e6 (image=quay.io/ceph/ceph@sha256:1228c3d05e45fbc068a8c33614e4409b6dac688bcc77369b06009b5830fa8d86, name=zealous_franklin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.41.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=tentacle, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=69f84cc2651aa259a15bc192ddaabd3baba07489, org.label-schema.build-date=20251030, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Jan 20 19:27:33 compute-0 systemd[1]: libpod-conmon-975d74d916a28cb5f2604a27ea87e73c7caa567e32db047dbb7b2fbde18264e6.scope: Deactivated successfully.
Jan 20 19:27:33 compute-0 sudo[248172]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Jan 20 19:27:33 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:27:33 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Jan 20 19:27:33 compute-0 ceph-mon[75120]: log_channel(audit) log [INF] : from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:27:33 compute-0 sudo[248588]: ceph-admin : PWD=/home/ceph-admin ; USER=root ; COMMAND=/bin/ls /etc/sysctl.d
Jan 20 19:27:33 compute-0 sudo[248588]: pam_unix(sudo:session): session opened for user root(uid=0) by ceph-admin(uid=42477)
Jan 20 19:27:33 compute-0 sudo[248588]: pam_unix(sudo:session): session closed for user root
Jan 20 19:27:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs dump"} v 0)
Jan 20 19:27:34 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3009412111' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:34 compute-0 ceph-mon[75120]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] scanning for idle connections..
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [volumes INFO mgr_util] cleaning up connections: []
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: vms, start_after=
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: volumes, start_after=
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: backups, start_after=
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:27:34 compute-0 ceph-mgr[75417]: [rbd_support INFO root] load_schedules: images, start_after=
Jan 20 19:27:35 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs ls"} v 0)
Jan 20 19:27:35 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/21246566' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 20 19:27:35 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:27:35 compute-0 rsyslogd[1007]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 20 19:27:35 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3595477887' entity='client.admin' cmd={"prefix": "df"} : dispatch
Jan 20 19:27:35 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:27:35 compute-0 ceph-mon[75120]: from='mgr.14124 192.168.122.100:0/2208094213' entity='mgr.compute-0.meyjbf' 
Jan 20 19:27:35 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3009412111' entity='client.admin' cmd={"prefix": "fs dump"} : dispatch
Jan 20 19:27:35 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14560 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:36 compute-0 ceph-mon[75120]: pgmap v864: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:36 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/21246566' entity='client.admin' cmd={"prefix": "fs ls"} : dispatch
Jan 20 19:27:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds stat"} v 0)
Jan 20 19:27:36 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3374717685' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 20 19:27:36 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:36 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump"} v 0)
Jan 20 19:27:36 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/884935235' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 20 19:27:36 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14566 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:37 compute-0 ceph-mon[75120]: from='client.14560 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:37 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/3374717685' entity='client.admin' cmd={"prefix": "mds stat"} : dispatch
Jan 20 19:27:37 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/884935235' entity='client.admin' cmd={"prefix": "mon dump"} : dispatch
Jan 20 19:27:37 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Jan 20 19:27:37 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1997733209' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 20 19:27:37 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14570 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:38 compute-0 ceph-mon[75120]: pgmap v865: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:38 compute-0 ceph-mon[75120]: from='client.14566 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:38 compute-0 ceph-mon[75120]: from='client.? 192.168.122.100:0/1997733209' entity='client.admin' cmd={"prefix": "osd blocklist ls"} : dispatch
Jan 20 19:27:38 compute-0 ceph-mgr[75417]: log_channel(audit) log [DBG] : from='client.14572 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Jan 20 19:27:38 compute-0 ceph-mgr[75417]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 461 KiB data, 136 MiB used, 60 GiB / 60 GiB avail
Jan 20 19:27:38 compute-0 ceph-mon[75120]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd dump"} v 0)
Jan 20 19:27:38 compute-0 ceph-mon[75120]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1822159949' entity='client.admin' cmd={"prefix": "osd dump"} : dispatch
